This project focuses on designing and optimizing a convolutional neural network (CNN) to classify political memes into conservative and liberal viewpoints, aiming for the highest possible accuracy on a validation set. The process employs techniques such as early stopping and model checkpointing to monitor performance, and adjusts the CNN architecture (e.g., the number of filters and layers) based on the resulting changes in accuracy. The task emphasizes reproducibility through a pre-determined data split and involves creating separate data generators for training, validation, and testing. The project's success will ultimately be evaluated on an independent test set, with learning curves and detailed analysis included in the final report.
For this project, the image dataset was sourced by exploring meme-focused pages on social platforms such as Reddit, Facebook, and Pinterest. A total of 1,000 images were collected, with an equal distribution between conservative and liberal political themes.
Special thanks to Kate Arendes for contributions to the collection process.
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
from tensorflow import keras
from keras import layers, regularizers
from keras.metrics import Precision
from keras.preprocessing.image import ImageDataGenerator
from google.colab import drive
First, let's load the images from Google Drive.
# Let's mount the drive to load the images
drive.mount('/content/drive')
Mounted at /content/drive
# Let's set the base directory for loading the political meme images
base_directory = "/content/drive/My Drive/Political Meme Dataset/"
# Let's initialize the ImageDataGenerator with rescaling to normalize pixel values
my_generator = ImageDataGenerator(rescale=1./255)
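The `rescale=1./255` factor simply divides every 8-bit pixel value by 255 so that inputs lie in [0, 1], a range that typically helps training converge. A minimal NumPy sketch of the same normalization (the dummy pixel values are made up for illustration):

```python
import numpy as np

# A dummy 2x2 RGB "image" with 8-bit pixel values, as the loader would produce
pixels = np.array([[[0, 128, 255]] * 2] * 2, dtype=np.uint8)

# Applying the generator's rescale factor of 1/255
normalized = pixels.astype(np.float32) / 255.0

print(normalized.min(), normalized.max())  # values now lie in [0, 1]
```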
# Let's set up the training data generator
# This loads images of size 150x150, in batches of 4, with binary class labels
train_generator = my_generator.flow_from_directory(
    f"{base_directory}training/",
    target_size=(150, 150),
    batch_size=4,
    class_mode='binary'
)
# Let's set up the validation data generator
# Loads images of the same size and batch size as the training generator
valid_generator = my_generator.flow_from_directory(
    f"{base_directory}validation/",
    target_size=(150, 150),
    batch_size=4,
    class_mode='binary'
)
# Let's set up the test data generator
# Uses the same parameters for consistency across training, validation, and testing
test_generator = my_generator.flow_from_directory(
    f"{base_directory}test/",
    target_size=(150, 150),
    batch_size=4,
    class_mode='binary'
)
Found 600 images belonging to 2 classes.
Found 200 images belonging to 2 classes.
Found 200 images belonging to 2 classes.
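`flow_from_directory` infers the two classes from the subdirectory names and assigns integer labels in alphabetical order, which is what keeps the 0/1 labels consistent across the three generators. A sketch of that assignment rule (the `train_conservative` folder name is an assumption; only `train_liberal` appears in the paths used here):

```python
# flow_from_directory sorts the class subdirectory names alphabetically
# and maps each to a consecutive integer index.
folders = ["train_liberal", "train_conservative"]  # hypothetical folder layout
class_indices = {name: i for i, name in enumerate(sorted(folders))}
print(class_indices)  # {'train_conservative': 0, 'train_liberal': 1}
```

The same mapping can be confirmed on the real generators via `train_generator.class_indices`.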
# Let's load a single image using the PIL library.
image = Image.open(f"{base_directory}/training/train_liberal/0f76446d7d65a9e6508a226ae33e8a51--felder-donald-oconnor.jpg")
# Let's get some details about the image.
print("Image Mode -->", image.mode)
print("Image Format --> ", image.format)
print("Image Size -->", image.size)
Image Mode --> RGB
Image Format -->  JPEG
Image Size --> (118, 108)
# Let's display the colored image
plt.imshow(np.asarray(image))
plt.colorbar()
<matplotlib.colorbar.Colorbar at 0x7f14dbb3e650>
# Let's convert the input image to grayscale
gs_image = image.convert(mode='L')
# Let's display the grayscale image using matplotlib
plt.imshow(np.asarray(gs_image), cmap='gray')
<matplotlib.image.AxesImage at 0x7f14dba3bd30>
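Pillow's mode `'L'` conversion applies the ITU-R 601-2 luma transform, L = R·299/1000 + G·587/1000 + B·114/1000. A small sketch verifying this on a single made-up pixel:

```python
from PIL import Image

# A 1x1 pure-red RGB image (a hypothetical test pixel)
px = Image.new("RGB", (1, 1), (255, 0, 0))

# Same operation as image.convert(mode='L') above
value = px.convert("L").getpixel((0, 0))

# ITU-R 601-2 luma: 0.299 * 255 = 76.245, truncated to an integer
print(value)  # 76
```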
# Let's resize the image to 200x200 pixels
img_resized = image.resize((200,200))
# Let's print the size of the resized image to verify the new dimensions
print(img_resized.size)
# Let's display the resized image using matplotlib
plt.imshow(np.asarray(img_resized))
(200, 200)
<matplotlib.image.AxesImage at 0x7f14dbad0c10>
# Let's loop through batches of images from the train generator
for my_batch in train_generator:
    images = my_batch[0]
    labels = my_batch[1]
    # Let's iterate over each image and its corresponding label in the batch
    for i in range(len(labels)):
        plt.imshow(images[i])
        plt.colorbar()
        plt.show()
        # Let's print the label associated with the image
        print(labels[i])
    break
1.0
0.0
0.0
0.0
# Let's loop through batches of images from the validation generator
for my_batch in valid_generator:
    images = my_batch[0]
    labels = my_batch[1]
    # Let's iterate over each image and its corresponding label in the batch
    for i in range(len(labels)):
        plt.imshow(images[i])
        plt.colorbar()
        plt.show()
        # Let's print the label associated with the image
        print(labels[i])
    break
1.0
0.0
0.0
1.0
# Let's loop through batches of images from the test generator
for my_batch in test_generator:
    images = my_batch[0]
    labels = my_batch[1]
    # Let's iterate over each image and its corresponding label in the batch
    for i in range(len(labels)):
        plt.imshow(images[i])
        plt.colorbar()
        plt.show()
        # Let's print the label associated with the image
        print(labels[i])
    break
0.0
0.0
0.0
0.0
# Define the input shape and number of classes
input_shape = (150, 150, 3)
num_classes = 2
# Start defining the model
inputs = keras.Input(shape=input_shape)
x = layers.Conv2D(32, 3, padding='same', activation='relu')(inputs)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(64, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(128, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(128, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
# Adding a couple more Conv2D and MaxPooling2D layers
x = layers.Conv2D(256, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
# Global Average Pooling followed by the classifier
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.5)(x) # Common dropout rate for regularization
# Output layer
outputs = layers.Dense(1, activation='sigmoid')(x)
# Finalize the model
model = keras.Model(inputs=inputs, outputs=outputs)
model.summary()
Model: "model_1"
_________________________________________________________________
 Layer (type)                     Output Shape              Param #
=================================================================
 input_2 (InputLayer)             [(None, 150, 150, 3)]     0
 conv2d_5 (Conv2D)                (None, 150, 150, 32)      896
 max_pooling2d_5 (MaxPooling2D)   (None, 75, 75, 32)        0
 conv2d_6 (Conv2D)                (None, 75, 75, 64)        18496
 max_pooling2d_6 (MaxPooling2D)   (None, 37, 37, 64)        0
 conv2d_7 (Conv2D)                (None, 37, 37, 128)       73856
 max_pooling2d_7 (MaxPooling2D)   (None, 18, 18, 128)       0
 conv2d_8 (Conv2D)                (None, 18, 18, 128)       147584
 max_pooling2d_8 (MaxPooling2D)   (None, 9, 9, 128)         0
 conv2d_9 (Conv2D)                (None, 9, 9, 256)         295168
 max_pooling2d_9 (MaxPooling2D)   (None, 4, 4, 256)         0
 global_average_pooling2d_1       (None, 256)               0
 (GlobalAveragePooling2D)
 dropout_1 (Dropout)              (None, 256)               0
 dense_1 (Dense)                  (None, 1)                 257
=================================================================
Total params: 536257 (2.05 MB)
Trainable params: 536257 (2.05 MB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
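The figures in the summary can be checked by hand: a Conv2D layer with k×k kernels has (k·k·C_in + 1)·C_out parameters (the +1 is the bias), and each 2×2 max-pool floors the spatial size in half. A quick sketch reproducing a few of the numbers above:

```python
# Conv2D parameter count: (kernel_h * kernel_w * in_channels + 1) * filters
def conv_params(k, c_in, c_out):
    return (k * k * c_in + 1) * c_out

print(conv_params(3, 3, 32))   # first conv layer: 896
print(conv_params(3, 32, 64))  # second conv layer: 18496

# Spatial size after each of the five 2x2 max-pools (floor division)
size = 150
for _ in range(5):
    size //= 2
print(size)  # 4, matching the final 4x4 feature map
```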
# Let's compile the CNN model using binary cross-entropy as the loss function and Adam as the optimizer
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy', keras.metrics.Precision()])
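Binary cross-entropy, the loss compiled here, is −[y·log p + (1−y)·log(1−p)] averaged over the batch, so confident wrong predictions are penalized heavily. A NumPy sketch of the computation (the sample labels and predictions are made up):

```python
import numpy as np

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    # Clip predictions to avoid log(0), as Keras does internally
    p = np.clip(y_pred, eps, 1 - eps)
    return float(np.mean(-(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))))

y_true = np.array([1.0, 0.0, 1.0, 0.0])
y_pred = np.array([0.9, 0.1, 0.8, 0.3])
print(round(binary_crossentropy(y_true, y_pred), 4))  # 0.1976
```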
# Let's define the callbacks for Model saving and Early stopping
cb_check = keras.callbacks.ModelCheckpoint(
    filepath="checkpoint_filepath",
    save_best_only=True,
    monitor="val_loss"
)
cb_early = keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=30
)
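Under the hood, `EarlyStopping` with `monitor="val_loss"` and `patience=30` just tracks the best value seen so far and stops once it has not improved for 30 consecutive epochs. A simplified pure-Python sketch of that logic (an illustration, not Keras's actual implementation):

```python
def early_stopping_epoch(val_losses, patience):
    """Return the 1-based epoch at which training would stop, or None if it never triggers."""
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:   # improvement: record it and reset the counter
            best = loss
            wait = 0
        else:             # no improvement: count toward patience
            wait += 1
            if wait >= patience:
                return epoch
    return None

# With patience=2, training stops after two consecutive epochs without improvement
print(early_stopping_epoch([0.5, 0.4, 0.45, 0.47, 0.41], patience=2))  # 4
```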
# Let's train and validate the model on the training and validation data
# (the batch size is set by the generators, so no batch_size argument is passed to fit)
history = model.fit(train_generator, validation_data=valid_generator, epochs=10, verbose=1, callbacks=[cb_check, cb_early])
Epoch 1/10 150/150 [==============================] - 7s 36ms/step - loss: 0.6890 - accuracy: 0.5767 - precision_1: 0.5821 - val_loss: 0.6557 - val_accuracy: 0.6200 - val_precision_1: 0.5682
Epoch 2/10 150/150 [==============================] - 5s 32ms/step - loss: 0.5482 - accuracy: 0.7567 - precision_1: 0.7188 - val_loss: 0.3334 - val_accuracy: 0.9400 - val_precision_1: 0.8929
Epoch 3/10 150/150 [==============================] - 5s 32ms/step - loss: 0.4263 - accuracy: 0.8567 - precision_1: 0.8242 - val_loss: 0.2132 - val_accuracy: 0.9450 - val_precision_1: 0.9159
Epoch 4/10 150/150 [==============================] - 5s 32ms/step - loss: 0.3202 - accuracy: 0.8750 - precision_1: 0.8462 - val_loss: 0.2123 - val_accuracy: 0.9050 - val_precision_1: 0.9551
Epoch 5/10 150/150 [==============================] - 5s 33ms/step - loss: 0.2947 - accuracy: 0.8983 - precision_1: 0.8842 - val_loss: 0.1506 - val_accuracy: 0.9550 - val_precision_1: 0.9417
Epoch 6/10 150/150 [==============================] - 4s 25ms/step - loss: 0.2917 - accuracy: 0.8917 - precision_1: 0.8660 - val_loss: 0.2518 - val_accuracy: 0.8800 - val_precision_1: 0.8065
Epoch 7/10 150/150 [==============================] - 4s 25ms/step - loss: 0.2616 - accuracy: 0.9067 - precision_1: 0.9013 - val_loss: 0.2803 - val_accuracy: 0.8700 - val_precision_1: 0.7937
Epoch 8/10 150/150 [==============================] - 4s 24ms/step - loss: 0.2586 - accuracy: 0.9033 - precision_1: 0.8854 - val_loss: 0.2528 - val_accuracy: 0.8850 - val_precision_1: 0.8130
Epoch 9/10 150/150 [==============================] - 4s 25ms/step - loss: 0.2721 - accuracy: 0.8867 - precision_1: 0.8718 - val_loss: 0.1694 - val_accuracy: 0.9350 - val_precision_1: 0.8850
Epoch 10/10 150/150 [==============================] - 4s 26ms/step - loss: 0.2068 - accuracy: 0.9167 - precision_1: 0.9058 - val_loss: 0.1603 - val_accuracy: 0.9400 - val_precision_1: 0.8929
train_accuracy = history.history["accuracy"]
train_loss = history.history["loss"]
train_precision = history.history["precision_1"]
val_accuracy = history.history["val_accuracy"]
val_loss = history.history["val_loss"]
val_precision = history.history["val_precision_1"]
epochs = range(1, len(train_accuracy) + 1)
plt.plot(epochs, train_accuracy, "bo", label="Training accuracy")
plt.title("Training Accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, train_loss, "bo", label="Training loss")
plt.title("Training Loss")
plt.legend()
plt.show()
plt.plot(epochs, train_precision, "bo", label="Training precision")
plt.title("Training Precision")
plt.legend()
plt.show()
plt.plot(epochs, val_accuracy, "bo", label="Validation accuracy")
plt.title("Validation Accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, val_loss, "bo", label="Validation loss")
plt.title("Validation Loss")
plt.legend()
plt.show()
plt.plot(epochs, val_precision, "bo", label="Validation precision")
plt.title("Validation Precision")
plt.legend()
plt.show()
# Let's load the best-performing model and evaluate it on the test data
model = keras.models.load_model("checkpoint_filepath")
model.evaluate(test_generator)
50/50 [==============================] - 73s 1s/step - loss: 0.2016 - accuracy: 0.9500 - precision_1: 0.9245
[0.20159845054149628, 0.949999988079071, 0.9245283007621765]
The initial model architecture reached a training accuracy of 0.9167 and a validation accuracy of 0.94 after 10 epochs. The next step is to increase the number of epochs to 30 and observe how the training and validation accuracies change over the longer run. The extended training period will help determine whether the model benefits from more training time or begins to overfit the training data. The trend in validation accuracy will also indicate how well the model generalizes to unseen data. Additional measures, such as early stopping or adjusting the learning rate, may be considered based on the outcomes observed at different epochs.
# Define the input shape and number of classes
input_shape = (150, 150, 3)
num_classes = 2
# Start defining the model
inputs = keras.Input(shape=input_shape)
x = layers.Conv2D(32, 3, padding='same', activation='relu')(inputs)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(64, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(128, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(128, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
# Adding a couple more Conv2D and MaxPooling2D layers
x = layers.Conv2D(256, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
# Global Average Pooling followed by the classifier
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.5)(x) # Common dropout rate for regularization
# Output layer
outputs = layers.Dense(1, activation='sigmoid')(x)
# Finalize the model
model_increase_epochs = keras.Model(inputs=inputs, outputs=outputs)
# Let's compile the CNN model using binary cross-entropy as the loss function and Adam as the optimizer
model_increase_epochs.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy', keras.metrics.Precision()])
# Let's define the callbacks for Model saving and Early stopping
cb_check = keras.callbacks.ModelCheckpoint(
    filepath="base_model_checkpoint_filepath",
    save_best_only=True,
    monitor="val_loss"
)
cb_early = keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=30
)
# Let's train and validate the model on the training and validation data
# (the batch size is set by the generators, so no batch_size argument is passed to fit)
history_increase_epochs = model_increase_epochs.fit(train_generator, validation_data=valid_generator, epochs=30, verbose=1, callbacks=[cb_check, cb_early])
Epoch 1/30 150/150 [==============================] - 7s 34ms/step - loss: 0.6025 - accuracy: 0.6800 - precision_2: 0.7673 - val_loss: 0.2778 - val_accuracy: 0.9400 - val_precision_2: 0.9314
Epoch 2/30 150/150 [==============================] - 4s 25ms/step - loss: 0.4617 - accuracy: 0.8133 - precision_2: 0.7749 - val_loss: 0.2888 - val_accuracy: 0.9550 - val_precision_2: 0.9333
Epoch 3/30 150/150 [==============================] - 5s 32ms/step - loss: 0.4052 - accuracy: 0.8600 - precision_2: 0.8396 - val_loss: 0.1892 - val_accuracy: 0.9550 - val_precision_2: 0.9417
Epoch 4/30 150/150 [==============================] - 5s 34ms/step - loss: 0.3427 - accuracy: 0.8683 - precision_2: 0.8421 - val_loss: 0.1826 - val_accuracy: 0.9400 - val_precision_2: 0.8929
Epoch 5/30 150/150 [==============================] - 4s 25ms/step - loss: 0.2917 - accuracy: 0.8833 - precision_2: 0.8571 - val_loss: 0.3593 - val_accuracy: 0.8400 - val_precision_2: 0.9722
Epoch 6/30 150/150 [==============================] - 4s 26ms/step - loss: 0.3290 - accuracy: 0.8750 - precision_2: 0.8571 - val_loss: 0.2781 - val_accuracy: 0.8900 - val_precision_2: 0.8197
Epoch 7/30 150/150 [==============================] - 4s 25ms/step - loss: 0.2959 - accuracy: 0.8983 - precision_2: 0.8794 - val_loss: 0.1890 - val_accuracy: 0.9350 - val_precision_2: 0.9394
Epoch 8/30 150/150 [==============================] - 5s 32ms/step - loss: 0.3395 - accuracy: 0.8550 - precision_2: 0.8257 - val_loss: 0.1690 - val_accuracy: 0.9700 - val_precision_2: 0.9519
Epoch 9/30 150/150 [==============================] - 5s 32ms/step - loss: 0.2545 - accuracy: 0.9167 - precision_2: 0.8956 - val_loss: 0.1290 - val_accuracy: 0.9550 - val_precision_2: 0.9174
Epoch 10/30 150/150 [==============================] - 4s 25ms/step - loss: 0.2300 - accuracy: 0.9133 - precision_2: 0.9000 - val_loss: 0.1414 - val_accuracy: 0.9400 - val_precision_2: 0.8929
Epoch 11/30 150/150 [==============================] - 5s 33ms/step - loss: 0.2220 - accuracy: 0.9167 - precision_2: 0.9058 - val_loss: 0.1166 - val_accuracy: 0.9700 - val_precision_2: 0.9519
Epoch 12/30 150/150 [==============================] - 4s 25ms/step - loss: 0.1804 - accuracy: 0.9317 - precision_2: 0.9191 - val_loss: 0.2790 - val_accuracy: 0.8800 - val_precision_2: 0.8065
Epoch 13/30 150/150 [==============================] - 4s 25ms/step - loss: 0.1969 - accuracy: 0.9233 - precision_2: 0.9097 - val_loss: 0.1492 - val_accuracy: 0.9500 - val_precision_2: 0.9091
Epoch 14/30 150/150 [==============================] - 5s 33ms/step - loss: 0.1829 - accuracy: 0.9300 - precision_2: 0.9188 - val_loss: 0.1068 - val_accuracy: 0.9650 - val_precision_2: 0.9429
Epoch 15/30 150/150 [==============================] - 5s 34ms/step - loss: 0.1340 - accuracy: 0.9567 - precision_2: 0.9536 - val_loss: 0.1021 - val_accuracy: 0.9600 - val_precision_2: 0.9340
Epoch 16/30 150/150 [==============================] - 4s 26ms/step - loss: 0.1566 - accuracy: 0.9467 - precision_2: 0.9408 - val_loss: 0.3301 - val_accuracy: 0.8950 - val_precision_2: 0.8264
Epoch 17/30 150/150 [==============================] - 4s 27ms/step - loss: 0.1869 - accuracy: 0.9333 - precision_2: 0.9248 - val_loss: 0.1835 - val_accuracy: 0.9350 - val_precision_2: 0.8850
Epoch 18/30 150/150 [==============================] - 4s 26ms/step - loss: 0.1433 - accuracy: 0.9483 - precision_2: 0.9529 - val_loss: 0.1376 - val_accuracy: 0.9400 - val_precision_2: 0.8929
Epoch 19/30 150/150 [==============================] - 4s 26ms/step - loss: 0.1453 - accuracy: 0.9467 - precision_2: 0.9379 - val_loss: 0.1894 - val_accuracy: 0.9350 - val_precision_2: 0.8850
Epoch 20/30 150/150 [==============================] - 5s 36ms/step - loss: 0.1103 - accuracy: 0.9517 - precision_2: 0.9593 - val_loss: 0.0948 - val_accuracy: 0.9750 - val_precision_2: 0.9798
Epoch 21/30 150/150 [==============================] - 4s 26ms/step - loss: 0.0834 - accuracy: 0.9667 - precision_2: 0.9667 - val_loss: 0.2334 - val_accuracy: 0.9250 - val_precision_2: 0.8696
Epoch 22/30 150/150 [==============================] - 4s 26ms/step - loss: 0.0864 - accuracy: 0.9650 - precision_2: 0.9604 - val_loss: 0.1216 - val_accuracy: 0.9500 - val_precision_2: 0.9245
Epoch 23/30 150/150 [==============================] - 4s 26ms/step - loss: 0.0735 - accuracy: 0.9750 - precision_2: 0.9703 - val_loss: 0.1159 - val_accuracy: 0.9650 - val_precision_2: 0.9346
Epoch 24/30 150/150 [==============================] - 5s 33ms/step - loss: 0.0852 - accuracy: 0.9617 - precision_2: 0.9632 - val_loss: 0.0707 - val_accuracy: 0.9700 - val_precision_2: 0.9434
Epoch 25/30 150/150 [==============================] - 5s 33ms/step - loss: 0.0523 - accuracy: 0.9833 - precision_2: 0.9801 - val_loss: 0.0654 - val_accuracy: 0.9800 - val_precision_2: 0.9800
Epoch 26/30 150/150 [==============================] - 4s 27ms/step - loss: 0.0563 - accuracy: 0.9783 - precision_2: 0.9767 - val_loss: 0.1835 - val_accuracy: 0.9500 - val_precision_2: 0.9091
Epoch 27/30 150/150 [==============================] - 4s 24ms/step - loss: 0.0925 - accuracy: 0.9617 - precision_2: 0.9601 - val_loss: 0.1404 - val_accuracy: 0.9500 - val_precision_2: 0.9167
Epoch 28/30 150/150 [==============================] - 4s 26ms/step - loss: 0.0781 - accuracy: 0.9733 - precision_2: 0.9702 - val_loss: 0.0910 - val_accuracy: 0.9700 - val_precision_2: 0.9434
Epoch 29/30 150/150 [==============================] - 4s 27ms/step - loss: 0.0469 - accuracy: 0.9817 - precision_2: 0.9801 - val_loss: 0.1405 - val_accuracy: 0.9550 - val_precision_2: 0.9252
Epoch 30/30 150/150 [==============================] - 4s 26ms/step - loss: 0.0180 - accuracy: 0.9983 - precision_2: 0.9967 - val_loss: 0.1724 - val_accuracy: 0.9650 - val_precision_2: 0.9429
train_accuracy = history_increase_epochs.history["accuracy"]
train_loss = history_increase_epochs.history["loss"]
train_precision = history_increase_epochs.history["precision_2"]
val_accuracy = history_increase_epochs.history["val_accuracy"]
val_loss = history_increase_epochs.history["val_loss"]
val_precision = history_increase_epochs.history["val_precision_2"]
epochs = range(1, len(train_accuracy) + 1)
plt.plot(epochs, train_accuracy, "bo", label="Training accuracy")
plt.title("Training Accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, train_loss, "bo", label="Training loss")
plt.title("Training Loss")
plt.legend()
plt.show()
plt.plot(epochs, train_precision, "bo", label="Training precision")
plt.title("Training Precision")
plt.legend()
plt.show()
plt.plot(epochs, val_accuracy, "bo", label="Validation accuracy")
plt.title("Validation Accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, val_loss, "bo", label="Validation loss")
plt.title("Validation Loss")
plt.legend()
plt.show()
plt.plot(epochs, val_precision, "bo", label="Validation precision")
plt.title("Validation Precision")
plt.legend()
plt.show()
# Let's load the best-performing model and evaluate it on the test data
best_model = keras.models.load_model("base_model_checkpoint_filepath")
best_model.evaluate(test_generator)
50/50 [==============================] - 1s 16ms/step - loss: 0.1484 - accuracy: 0.9550 - precision_2: 0.9333
[0.14840058982372284, 0.9549999833106995, 0.9333333373069763]
Increasing the number of epochs from 10 to 30 raised the training accuracy from 0.9167 to 0.9983. Similarly, the validation accuracy increased from 0.94 to 0.97. As expected, the time required for training increased from about 3 minutes to about 5 minutes due to the higher number of epochs. This indicates that the model benefited from additional training, as evidenced by the improvements in both training and validation accuracy.
# Define the input shape and number of classes
input_shape = (150, 150, 3)
num_classes = 2
# Start defining the model
inputs = keras.Input(shape=input_shape)
x = layers.Conv2D(32, 3, padding='same', activation='relu')(inputs)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(64, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(128, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
# Adding a couple more Conv2D and MaxPooling2D layers
x = layers.Conv2D(256, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
# Global Average Pooling followed by the classifier
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.5)(x) # Common dropout rate for regularization
# Output layer
outputs = layers.Dense(1, activation='sigmoid')(x)
# Finalize the model
model_decrease_layers = keras.Model(inputs=inputs, outputs=outputs)
# Let's compile the CNN model using binary cross-entropy as the loss function and Adam as the optimizer
model_decrease_layers.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy', keras.metrics.Precision()])
# Let's define the callbacks for Model saving and Early stopping
cb_check = keras.callbacks.ModelCheckpoint(
    filepath="decrease_layers_checkpoint_filepath",
    save_best_only=True,
    monitor="val_loss"
)
cb_early = keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=30
)
# Let's train and validate the model on the training and validation data
# (the batch size is set by the generators, so no batch_size argument is passed to fit)
history_decrease_layers = model_decrease_layers.fit(train_generator, validation_data=valid_generator, epochs=30, verbose=1, callbacks=[cb_check, cb_early])
Epoch 1/30 150/150 [==============================] - 7s 36ms/step - loss: 0.6384 - accuracy: 0.6117 - precision_6: 0.6683 - val_loss: 0.3889 - val_accuracy: 0.8500 - val_precision_6: 0.9268
Epoch 2/30 150/150 [==============================] - 5s 31ms/step - loss: 0.5383 - accuracy: 0.8133 - precision_6: 0.7848 - val_loss: 0.2210 - val_accuracy: 0.9300 - val_precision_6: 0.9216
Epoch 3/30 150/150 [==============================] - 4s 25ms/step - loss: 0.4117 - accuracy: 0.8483 - precision_6: 0.8047 - val_loss: 0.3755 - val_accuracy: 0.8250 - val_precision_6: 0.7407
Epoch 4/30 150/150 [==============================] - 5s 31ms/step - loss: 0.3692 - accuracy: 0.8600 - precision_6: 0.8214 - val_loss: 0.1845 - val_accuracy: 0.9300 - val_precision_6: 0.9574
Epoch 5/30 150/150 [==============================] - 4s 26ms/step - loss: 0.3661 - accuracy: 0.8850 - precision_6: 0.8468 - val_loss: 0.1889 - val_accuracy: 0.9600 - val_precision_6: 0.9423
Epoch 6/30 150/150 [==============================] - 4s 25ms/step - loss: 0.3195 - accuracy: 0.8883 - precision_6: 0.8585 - val_loss: 0.2010 - val_accuracy: 0.9350 - val_precision_6: 0.8850
Epoch 7/30 150/150 [==============================] - 5s 31ms/step - loss: 0.3544 - accuracy: 0.8900 - precision_6: 0.8503 - val_loss: 0.1639 - val_accuracy: 0.9700 - val_precision_6: 0.9608
Epoch 8/30 150/150 [==============================] - 4s 26ms/step - loss: 0.3111 - accuracy: 0.8867 - precision_6: 0.8558 - val_loss: 0.3436 - val_accuracy: 0.8400 - val_precision_6: 0.7576
Epoch 9/30 150/150 [==============================] - 5s 31ms/step - loss: 0.2674 - accuracy: 0.9050 - precision_6: 0.8932 - val_loss: 0.1489 - val_accuracy: 0.9450 - val_precision_6: 0.9495
Epoch 10/30 150/150 [==============================] - 4s 25ms/step - loss: 0.2787 - accuracy: 0.8900 - precision_6: 0.8634 - val_loss: 0.1836 - val_accuracy: 0.9600 - val_precision_6: 0.9340
Epoch 11/30 150/150 [==============================] - 5s 31ms/step - loss: 0.2507 - accuracy: 0.9100 - precision_6: 0.8968 - val_loss: 0.1213 - val_accuracy: 0.9700 - val_precision_6: 0.9519
Epoch 12/30 150/150 [==============================] - 4s 25ms/step - loss: 0.2745 - accuracy: 0.8967 - precision_6: 0.8889 - val_loss: 0.2140 - val_accuracy: 0.9050 - val_precision_6: 0.8403
Epoch 13/30 150/150 [==============================] - 4s 25ms/step - loss: 0.2572 - accuracy: 0.9083 - precision_6: 0.8889 - val_loss: 0.1525 - val_accuracy: 0.9600 - val_precision_6: 0.9259
Epoch 14/30 150/150 [==============================] - 4s 25ms/step - loss: 0.2253 - accuracy: 0.9183 - precision_6: 0.9088 - val_loss: 0.1747 - val_accuracy: 0.9350 - val_precision_6: 0.8850
Epoch 15/30 150/150 [==============================] - 5s 31ms/step - loss: 0.2156 - accuracy: 0.9133 - precision_6: 0.8949 - val_loss: 0.1048 - val_accuracy: 0.9650 - val_precision_6: 0.9515
Epoch 16/30 150/150 [==============================] - 4s 25ms/step - loss: 0.2030 - accuracy: 0.9300 - precision_6: 0.9135 - val_loss: 0.1429 - val_accuracy: 0.9400 - val_precision_6: 0.8929
Epoch 17/30 150/150 [==============================] - 4s 25ms/step - loss: 0.1842 - accuracy: 0.9283 - precision_6: 0.9186 - val_loss: 0.1763 - val_accuracy: 0.9100 - val_precision_6: 0.8475
Epoch 18/30 150/150 [==============================] - 4s 25ms/step - loss: 0.2090 - accuracy: 0.9183 - precision_6: 0.8959 - val_loss: 0.2044 - val_accuracy: 0.9000 - val_precision_6: 0.8333
Epoch 19/30 150/150 [==============================] - 4s 25ms/step - loss: 0.1977 - accuracy: 0.9233 - precision_6: 0.9097 - val_loss: 0.1396 - val_accuracy: 0.9550 - val_precision_6: 0.9174
Epoch 20/30 150/150 [==============================] - 4s 26ms/step - loss: 0.2096 - accuracy: 0.9217 - precision_6: 0.9148 - val_loss: 0.1372 - val_accuracy: 0.9700 - val_precision_6: 0.9434
Epoch 21/30 150/150 [==============================] - 4s 25ms/step - loss: 0.1763 - accuracy: 0.9383 - precision_6: 0.9283 - val_loss: 0.1483 - val_accuracy: 0.9450 - val_precision_6: 0.9009
Epoch 22/30 150/150 [==============================] - 5s 31ms/step - loss: 0.1603 - accuracy: 0.9383 - precision_6: 0.9311 - val_loss: 0.0925 - val_accuracy: 0.9700 - val_precision_6: 0.9434
Epoch 23/30 150/150 [==============================] - 4s 25ms/step - loss: 0.1325 - accuracy: 0.9533 - precision_6: 0.9444 - val_loss: 0.1019 - val_accuracy: 0.9650 - val_precision_6: 0.9346
Epoch 24/30 150/150 [==============================] - 5s 34ms/step - loss: 0.1211 - accuracy: 0.9533 - precision_6: 0.9474 - val_loss: 0.0873 - val_accuracy: 0.9700 - val_precision_6: 0.9608
Epoch 25/30 150/150 [==============================] - 4s 25ms/step - loss: 0.1849 - accuracy: 0.9333 - precision_6: 0.9167 - val_loss: 0.1551 - val_accuracy: 0.9300 - val_precision_6: 0.8772
Epoch 26/30 150/150 [==============================] - 4s 25ms/step - loss: 0.1315 - accuracy: 0.9467 - precision_6: 0.9408 - val_loss: 0.1343 - val_accuracy: 0.9450 - val_precision_6: 0.9238
Epoch 27/30 150/150 [==============================] - 4s 25ms/step - loss: 0.1170 - accuracy: 0.9617 - precision_6: 0.9601 - val_loss: 0.1024 - val_accuracy: 0.9650 - val_precision_6: 0.9346
Epoch 28/30 150/150 [==============================] - 4s 25ms/step - loss: 0.0982 - accuracy: 0.9650 - precision_6: 0.9574 - val_loss: 0.2915 - val_accuracy: 0.8850 - val_precision_6: 0.8130
Epoch 29/30 150/150 [==============================] - 4s 25ms/step - loss: 0.0881 - accuracy: 0.9667 - precision_6: 0.9605 - val_loss: 0.1394 - val_accuracy: 0.9400 - val_precision_6: 0.8929
Epoch 30/30 150/150 [==============================] - 4s 25ms/step - loss: 0.0770 - accuracy: 0.9683 - precision_6: 0.9637 - val_loss: 0.1180 - val_accuracy: 0.9500 - val_precision_6: 0.9500
train_accuracy = history_decrease_layers.history["accuracy"]
train_loss = history_decrease_layers.history["loss"]
train_precision = history_decrease_layers.history["precision_6"]
val_accuracy = history_decrease_layers.history["val_accuracy"]
val_loss = history_decrease_layers.history["val_loss"]
val_precision = history_decrease_layers.history["val_precision_6"]
epochs = range(1, len(train_accuracy) + 1)
plt.plot(epochs, train_accuracy, "bo", label="Training accuracy")
plt.title("Training Accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, train_loss, "bo", label="Training loss")
plt.title("Training Loss")
plt.legend()
plt.show()
plt.plot(epochs, train_precision, "bo", label="Training precision")
plt.title("Training Precision")
plt.legend()
plt.show()
plt.plot(epochs, val_accuracy, "bo", label="Validation accuracy")
plt.title("Validation Accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, val_loss, "bo", label="Validation loss")
plt.title("Validation Loss")
plt.legend()
plt.show()
plt.plot(epochs, val_precision, "bo", label="Validation precision")
plt.title("Validation Precision")
plt.legend()
plt.show()
# Let's load the best-performing model and evaluate it on the test data
model = keras.models.load_model("decrease_layers_checkpoint_filepath")
model.evaluate(test_generator)
50/50 [==============================] - 1s 15ms/step - loss: 0.1241 - accuracy: 0.9400 - precision_6: 0.9000
[0.12408556789159775, 0.9399999976158142, 0.8999999761581421]
Surprisingly, removing one convolution and pooling block resulted in a training accuracy of 0.9683 and a validation accuracy of 0.95. Training and validation for 30 epochs took almost 3 minutes. Test performance is similar to the configuration with more layers: the accuracy with fewer layers is 0.94, while the accuracy with more layers is 0.9550.
# Define the input shape and number of classes
input_shape = (150, 150, 3)
num_classes = 2
# Start defining the model
inputs = keras.Input(shape=input_shape)
x = layers.Conv2D(32, 3, padding='same', activation='relu')(inputs)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(64, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(256, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(256, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
# Adding a couple more Conv2D and MaxPooling2D layers
x = layers.Conv2D(256, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
# Global Average Pooling followed by the classifier
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.5)(x) # Common dropout rate for regularization
# Output layer
outputs = layers.Dense(1, activation='sigmoid')(x)
# Finalize the model
model_increased_filters = keras.Model(inputs=inputs, outputs=outputs)
# Let's compile the CNN model using binary cross-entropy as the loss function and Adam as the optimizer
model_increased_filters.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy', keras.metrics.Precision()])
# Let's define the callbacks for Model saving and Early stopping
cb_check = keras.callbacks.ModelCheckpoint(
    filepath="increase_filters_checkpoint_filepath",
    save_best_only=True,
    monitor="val_loss"
)
cb_early = keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=30
)
# Let's train and validate model on the training and validation data
history_increased_filters = model_increased_filters.fit(train_generator, validation_data = valid_generator, epochs = 30, verbose = 1, batch_size = 8, callbacks = [cb_check, cb_early])
Epoch 1/30 150/150 [==============================] - 7s 35ms/step - loss: 0.6311 - accuracy: 0.6733 - precision_9: 0.6711 - val_loss: 0.3597 - val_accuracy: 0.9000 - val_precision_9: 0.8448 Epoch 2/30 150/150 [==============================] - 4s 25ms/step - loss: 0.5611 - accuracy: 0.7550 - precision_9: 0.7029 - val_loss: 0.6817 - val_accuracy: 0.5150 - val_precision_9: 0.5076 Epoch 3/30 150/150 [==============================] - 4s 26ms/step - loss: 0.6183 - accuracy: 0.7517 - precision_9: 0.6921 - val_loss: 0.4809 - val_accuracy: 0.8200 - val_precision_9: 0.7353 Epoch 4/30 150/150 [==============================] - 5s 35ms/step - loss: 0.5312 - accuracy: 0.7850 - precision_9: 0.7221 - val_loss: 0.2999 - val_accuracy: 0.9100 - val_precision_9: 0.9457 Epoch 5/30 150/150 [==============================] - 5s 33ms/step - loss: 0.4459 - accuracy: 0.8367 - precision_9: 0.7953 - val_loss: 0.2461 - val_accuracy: 0.9050 - val_precision_9: 0.8403 Epoch 6/30 150/150 [==============================] - 5s 33ms/step - loss: 0.3995 - accuracy: 0.8500 - precision_9: 0.8125 - val_loss: 0.1975 - val_accuracy: 0.9450 - val_precision_9: 0.9406 Epoch 7/30 150/150 [==============================] - 5s 33ms/step - loss: 0.3167 - accuracy: 0.8800 - precision_9: 0.8608 - val_loss: 0.1454 - val_accuracy: 0.9600 - val_precision_9: 0.9259 Epoch 8/30 150/150 [==============================] - 5s 34ms/step - loss: 0.2897 - accuracy: 0.8883 - precision_9: 0.8698 - val_loss: 0.1401 - val_accuracy: 0.9550 - val_precision_9: 0.9174 Epoch 9/30 150/150 [==============================] - 4s 25ms/step - loss: 0.2840 - accuracy: 0.9033 - precision_9: 0.8878 - val_loss: 0.2197 - val_accuracy: 0.9100 - val_precision_9: 0.8475 Epoch 10/30 150/150 [==============================] - 4s 26ms/step - loss: 0.3070 - accuracy: 0.8967 - precision_9: 0.8719 - val_loss: 0.1498 - val_accuracy: 0.9600 - val_precision_9: 0.9340 Epoch 11/30 150/150 [==============================] - 5s 33ms/step - loss: 0.2452 - 
accuracy: 0.9083 - precision_9: 0.8990 - val_loss: 0.0774 - val_accuracy: 0.9850 - val_precision_9: 0.9709 Epoch 12/30 150/150 [==============================] - 4s 26ms/step - loss: 0.2598 - accuracy: 0.9200 - precision_9: 0.9145 - val_loss: 0.1180 - val_accuracy: 0.9800 - val_precision_9: 0.9615 Epoch 13/30 150/150 [==============================] - 4s 25ms/step - loss: 0.2308 - accuracy: 0.9200 - precision_9: 0.9013 - val_loss: 0.1339 - val_accuracy: 0.9550 - val_precision_9: 0.9174 Epoch 14/30 150/150 [==============================] - 4s 26ms/step - loss: 0.2041 - accuracy: 0.9233 - precision_9: 0.9123 - val_loss: 0.1062 - val_accuracy: 0.9700 - val_precision_9: 0.9434 Epoch 15/30 150/150 [==============================] - 4s 25ms/step - loss: 0.1899 - accuracy: 0.9300 - precision_9: 0.9216 - val_loss: 0.0901 - val_accuracy: 0.9750 - val_precision_9: 0.9524 Epoch 16/30 150/150 [==============================] - 4s 25ms/step - loss: 0.1939 - accuracy: 0.9317 - precision_9: 0.9274 - val_loss: 0.0812 - val_accuracy: 0.9800 - val_precision_9: 0.9706 Epoch 17/30 150/150 [==============================] - 4s 25ms/step - loss: 0.1993 - accuracy: 0.9317 - precision_9: 0.9302 - val_loss: 0.0855 - val_accuracy: 0.9700 - val_precision_9: 0.9434 Epoch 18/30 150/150 [==============================] - 4s 25ms/step - loss: 0.1534 - accuracy: 0.9467 - precision_9: 0.9379 - val_loss: 0.1325 - val_accuracy: 0.9600 - val_precision_9: 0.9259 Epoch 19/30 150/150 [==============================] - 4s 25ms/step - loss: 0.1672 - accuracy: 0.9383 - precision_9: 0.9340 - val_loss: 0.2987 - val_accuracy: 0.9100 - val_precision_9: 0.8475 Epoch 20/30 150/150 [==============================] - 4s 25ms/step - loss: 0.1494 - accuracy: 0.9433 - precision_9: 0.9346 - val_loss: 0.1611 - val_accuracy: 0.9350 - val_precision_9: 0.8919 Epoch 21/30 150/150 [==============================] - 4s 26ms/step - loss: 0.1508 - accuracy: 0.9517 - precision_9: 0.9502 - val_loss: 0.2378 - val_accuracy: 
0.8950 - val_precision_9: 0.8264 Epoch 22/30 150/150 [==============================] - 4s 26ms/step - loss: 0.1223 - accuracy: 0.9500 - precision_9: 0.9470 - val_loss: 0.1070 - val_accuracy: 0.9600 - val_precision_9: 0.9340 Epoch 23/30 150/150 [==============================] - 4s 25ms/step - loss: 0.1350 - accuracy: 0.9533 - precision_9: 0.9474 - val_loss: 0.1106 - val_accuracy: 0.9600 - val_precision_9: 0.9259 Epoch 24/30 150/150 [==============================] - 4s 26ms/step - loss: 0.1030 - accuracy: 0.9650 - precision_9: 0.9604 - val_loss: 0.0970 - val_accuracy: 0.9750 - val_precision_9: 0.9524 Epoch 25/30 150/150 [==============================] - 4s 25ms/step - loss: 0.0725 - accuracy: 0.9767 - precision_9: 0.9735 - val_loss: 0.2406 - val_accuracy: 0.9100 - val_precision_9: 0.8475 Epoch 26/30 150/150 [==============================] - 4s 25ms/step - loss: 0.1095 - accuracy: 0.9667 - precision_9: 0.9667 - val_loss: 0.2239 - val_accuracy: 0.8900 - val_precision_9: 0.8197 Epoch 27/30 150/150 [==============================] - 4s 26ms/step - loss: 0.0797 - accuracy: 0.9767 - precision_9: 0.9735 - val_loss: 0.1151 - val_accuracy: 0.9400 - val_precision_9: 0.8929 Epoch 28/30 150/150 [==============================] - 4s 26ms/step - loss: 0.0820 - accuracy: 0.9750 - precision_9: 0.9642 - val_loss: 0.1602 - val_accuracy: 0.9500 - val_precision_9: 0.9091 Epoch 29/30 150/150 [==============================] - 4s 25ms/step - loss: 0.1663 - accuracy: 0.9467 - precision_9: 0.9379 - val_loss: 0.0975 - val_accuracy: 0.9650 - val_precision_9: 0.9346 Epoch 30/30 150/150 [==============================] - 5s 33ms/step - loss: 0.0960 - accuracy: 0.9667 - precision_9: 0.9575 - val_loss: 0.0504 - val_accuracy: 0.9750 - val_precision_9: 0.9524
train_accuracy = history_increased_filters.history["accuracy"]
train_loss = history_increased_filters.history["loss"]
# Note: the "precision_9" key matches the auto-generated name of this run's Precision metric
train_precision = history_increased_filters.history["precision_9"]
val_accuracy = history_increased_filters.history["val_accuracy"]
val_loss = history_increased_filters.history["val_loss"]
val_precision = history_increased_filters.history["val_precision_9"]
epochs = range(1, len(train_accuracy) + 1)
plt.plot(epochs, train_accuracy, "bo", label="Training accuracy")
plt.title("Training Accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, train_loss, "bo", label="Training loss")
plt.title("Training Loss")
plt.legend()
plt.show()
plt.plot(epochs, train_precision, "bo", label="Training precision")
plt.title("Training Precision")
plt.legend()
plt.show()
plt.plot(epochs, val_accuracy, "bo", label="Validation accuracy")
plt.title("Validation Accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, val_loss, "bo", label="Validation loss")
plt.title("Validation Loss")
plt.legend()
plt.show()
plt.plot(epochs, val_precision, "bo", label="Validation precision")
plt.title("Validation Precision")
plt.legend()
plt.show()
# Let's load the best-performing model and evaluate it on the test data
model = keras.models.load_model("increase_filters_checkpoint_filepath")
model.evaluate(test_generator)
50/50 [==============================] - 1s 17ms/step - loss: 0.1671 - accuracy: 0.9450 - precision_9: 0.9009
[0.1671462506055832, 0.9449999928474426, 0.9009009003639221]
Surprisingly, increasing the number of convolution filters from 128 to 256 in two convolution layers resulted in a training accuracy of 0.9667 and a validation accuracy of 0.9750. Training and validation for 30 epochs took close to 3 minutes. The test performance is similar to that of the base configuration: the accuracy with the increased filters is 0.9450, while the accuracy with the base configuration is 0.9550.
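The filter count matters mostly for model size: a Conv2D layer with a k×k kernel has k·k·in_channels·out_channels weights plus out_channels biases, so doubling the filters in back-to-back layers roughly quadruples their parameters. A quick check (plain Python):

```python
def conv2d_params(kernel, in_ch, out_ch):
    """Weights plus biases for a single Conv2D layer."""
    return kernel * kernel * in_ch * out_ch + out_ch

# A 3x3 convolution going 256 -> 256 channels vs. 128 -> 128 channels
print(conv2d_params(3, 256, 256))  # 590080
print(conv2d_params(3, 128, 128))  # 147584
```

The fourfold parameter increase in those layers bought no test accuracy here, which suggests the base capacity was already sufficient for this dataset.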
# Define the input shape and number of classes
input_shape = (100, 100, 3)
num_classes = 2
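Note that the generators set up earlier load 150×150 images, which would not match a (100, 100, 3) input, so the generators need to be re-created with a matching `target_size`. A sketch, assuming the same `base_directory` as before; the `"train"` subdirectory name here is an assumption, not taken from the notebook:

```python
my_generator = ImageDataGenerator(rescale=1./255)
train_generator = my_generator.flow_from_directory(
    base_directory + "train",   # hypothetical subdirectory name
    target_size=(100, 100),     # must match the model's input_shape
    batch_size=4,
    class_mode='binary')
```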
# Start defining the model
inputs = keras.Input(shape=input_shape)
x = layers.Conv2D(32, 3, padding='same', activation='relu')(inputs)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(64, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(256, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
x = layers.Conv2D(256, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
# One additional Conv2D and MaxPooling2D stage
x = layers.Conv2D(256, 3, padding='same', activation='relu')(x)
x = layers.MaxPooling2D(pool_size=2)(x)
# Global Average Pooling followed by the classifier
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.5)(x) # Common dropout rate for regularization
# Output layer
outputs = layers.Dense(1, activation='sigmoid')(x)
# Finalize the model
model_decreased_image_size = keras.Model(inputs=inputs, outputs=outputs)
# Let's compile the CNN model using binary cross-entropy as the loss function and Adam as the optimizer
model_decreased_image_size.compile(loss = 'binary_crossentropy', optimizer = 'adam', metrics = ['accuracy', keras.metrics.Precision()])
# Let's define the callbacks for Model saving and Early stopping
cb_check = keras.callbacks.ModelCheckpoint(
    filepath="decrease_image_checkpoint_filepath",
    save_best_only=True,
    monitor="val_loss")
cb_early = keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=30
)
# Let's train and validate model on the training and validation data
history_decreased_image_size = model_decreased_image_size.fit(train_generator, validation_data = valid_generator, epochs = 30, verbose = 1, batch_size = 8, callbacks = [cb_check, cb_early])
Epoch 1/30 150/150 [==============================] - 7s 35ms/step - loss: 0.6771 - accuracy: 0.5433 - precision_10: 0.5684 - val_loss: 0.4984 - val_accuracy: 0.8500 - val_precision_10: 0.9487 Epoch 2/30 150/150 [==============================] - 4s 25ms/step - loss: 0.6645 - accuracy: 0.6917 - precision_10: 0.6448 - val_loss: 0.6203 - val_accuracy: 0.8000 - val_precision_10: 0.8947 Epoch 3/30 150/150 [==============================] - 5s 33ms/step - loss: 0.5489 - accuracy: 0.7933 - precision_10: 0.7588 - val_loss: 0.2703 - val_accuracy: 0.9100 - val_precision_10: 0.8475 Epoch 4/30 150/150 [==============================] - 4s 25ms/step - loss: 0.4003 - accuracy: 0.8517 - precision_10: 0.8187 - val_loss: 0.3547 - val_accuracy: 0.8450 - val_precision_10: 0.7674 Epoch 5/30 150/150 [==============================] - 5s 33ms/step - loss: 0.3474 - accuracy: 0.8750 - precision_10: 0.8483 - val_loss: 0.2459 - val_accuracy: 0.9100 - val_precision_10: 0.8534 Epoch 6/30 150/150 [==============================] - 5s 35ms/step - loss: 0.3309 - accuracy: 0.8717 - precision_10: 0.8431 - val_loss: 0.1803 - val_accuracy: 0.9400 - val_precision_10: 0.9400 Epoch 7/30 150/150 [==============================] - 5s 33ms/step - loss: 0.2782 - accuracy: 0.8967 - precision_10: 0.8719 - val_loss: 0.1684 - val_accuracy: 0.9350 - val_precision_10: 0.9780 Epoch 8/30 150/150 [==============================] - 4s 26ms/step - loss: 0.3530 - accuracy: 0.8683 - precision_10: 0.8421 - val_loss: 0.1783 - val_accuracy: 0.9550 - val_precision_10: 0.9174 Epoch 9/30 150/150 [==============================] - 4s 25ms/step - loss: 0.2585 - accuracy: 0.9150 - precision_10: 0.9055 - val_loss: 0.1725 - val_accuracy: 0.9350 - val_precision_10: 0.8850 Epoch 10/30 150/150 [==============================] - 5s 33ms/step - loss: 0.2449 - accuracy: 0.9083 - precision_10: 0.8964 - val_loss: 0.1183 - val_accuracy: 0.9750 - val_precision_10: 0.9524 Epoch 11/30 150/150 [==============================] - 4s 25ms/step 
- loss: 0.2045 - accuracy: 0.9217 - precision_10: 0.9175 - val_loss: 0.2540 - val_accuracy: 0.8950 - val_precision_10: 0.8264 Epoch 12/30 150/150 [==============================] - 4s 25ms/step - loss: 0.2040 - accuracy: 0.9250 - precision_10: 0.9264 - val_loss: 0.2036 - val_accuracy: 0.9100 - val_precision_10: 0.8475 Epoch 13/30 150/150 [==============================] - 4s 25ms/step - loss: 0.2061 - accuracy: 0.9217 - precision_10: 0.9121 - val_loss: 0.1259 - val_accuracy: 0.9550 - val_precision_10: 0.9174 Epoch 14/30 150/150 [==============================] - 4s 26ms/step - loss: 0.1878 - accuracy: 0.9333 - precision_10: 0.9392 - val_loss: 0.1239 - val_accuracy: 0.9650 - val_precision_10: 0.9346 Epoch 15/30 150/150 [==============================] - 5s 32ms/step - loss: 0.1955 - accuracy: 0.9250 - precision_10: 0.9293 - val_loss: 0.0922 - val_accuracy: 0.9700 - val_precision_10: 0.9434 Epoch 16/30 150/150 [==============================] - 4s 25ms/step - loss: 0.1650 - accuracy: 0.9450 - precision_10: 0.9406 - val_loss: 0.1260 - val_accuracy: 0.9650 - val_precision_10: 0.9515 Epoch 17/30 150/150 [==============================] - 4s 26ms/step - loss: 0.1498 - accuracy: 0.9467 - precision_10: 0.9467 - val_loss: 0.1421 - val_accuracy: 0.9750 - val_precision_10: 0.9612 Epoch 18/30 150/150 [==============================] - 4s 25ms/step - loss: 0.1711 - accuracy: 0.9400 - precision_10: 0.9314 - val_loss: 0.1141 - val_accuracy: 0.9600 - val_precision_10: 0.9259 Epoch 19/30 150/150 [==============================] - 4s 25ms/step - loss: 0.1386 - accuracy: 0.9533 - precision_10: 0.9533 - val_loss: 0.1124 - val_accuracy: 0.9700 - val_precision_10: 0.9519 Epoch 20/30 150/150 [==============================] - 5s 33ms/step - loss: 0.1379 - accuracy: 0.9483 - precision_10: 0.9410 - val_loss: 0.0620 - val_accuracy: 0.9750 - val_precision_10: 0.9524 Epoch 21/30 150/150 [==============================] - 4s 26ms/step - loss: 0.1545 - accuracy: 0.9433 - precision_10: 0.9375 - 
val_loss: 0.2216 - val_accuracy: 0.9100 - val_precision_10: 0.8475 Epoch 22/30 150/150 [==============================] - 4s 25ms/step - loss: 0.1055 - accuracy: 0.9550 - precision_10: 0.9565 - val_loss: 0.0851 - val_accuracy: 0.9650 - val_precision_10: 0.9429 Epoch 23/30 150/150 [==============================] - 4s 26ms/step - loss: 0.1115 - accuracy: 0.9600 - precision_10: 0.9600 - val_loss: 0.0859 - val_accuracy: 0.9650 - val_precision_10: 0.9346 Epoch 24/30 150/150 [==============================] - 4s 24ms/step - loss: 0.0806 - accuracy: 0.9733 - precision_10: 0.9671 - val_loss: 0.1564 - val_accuracy: 0.9600 - val_precision_10: 0.9259 Epoch 25/30 150/150 [==============================] - 4s 26ms/step - loss: 0.0836 - accuracy: 0.9700 - precision_10: 0.9669 - val_loss: 0.1628 - val_accuracy: 0.9550 - val_precision_10: 0.9174 Epoch 26/30 150/150 [==============================] - 4s 26ms/step - loss: 0.0939 - accuracy: 0.9700 - precision_10: 0.9638 - val_loss: 0.0989 - val_accuracy: 0.9750 - val_precision_10: 0.9524 Epoch 27/30 150/150 [==============================] - 4s 25ms/step - loss: 0.0828 - accuracy: 0.9700 - precision_10: 0.9608 - val_loss: 0.1323 - val_accuracy: 0.9600 - val_precision_10: 0.9259 Epoch 28/30 150/150 [==============================] - 4s 25ms/step - loss: 0.1188 - accuracy: 0.9583 - precision_10: 0.9421 - val_loss: 0.1243 - val_accuracy: 0.9400 - val_precision_10: 0.8929 Epoch 29/30 150/150 [==============================] - 4s 26ms/step - loss: 0.1644 - accuracy: 0.9500 - precision_10: 0.9441 - val_loss: 0.0734 - val_accuracy: 0.9800 - val_precision_10: 0.9615 Epoch 30/30 150/150 [==============================] - 5s 33ms/step - loss: 0.1053 - accuracy: 0.9650 - precision_10: 0.9666 - val_loss: 0.0532 - val_accuracy: 0.9900 - val_precision_10: 0.9804
train_accuracy = history_decreased_image_size.history["accuracy"]
train_loss = history_decreased_image_size.history["loss"]
train_precision = history_decreased_image_size.history["precision_10"]
val_accuracy = history_decreased_image_size.history["val_accuracy"]
val_loss = history_decreased_image_size.history["val_loss"]
val_precision = history_decreased_image_size.history["val_precision_10"]
epochs = range(1, len(train_accuracy) + 1)
plt.plot(epochs, train_accuracy, "bo", label="Training accuracy")
plt.title("Training Accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, train_loss, "bo", label="Training loss")
plt.title("Training Loss")
plt.legend()
plt.show()
plt.plot(epochs, train_precision, "bo", label="Training precision")
plt.title("Training Precision")
plt.legend()
plt.show()
plt.plot(epochs, val_accuracy, "bo", label="Validation accuracy")
plt.title("Validation Accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, val_loss, "bo", label="Validation loss")
plt.title("Validation Loss")
plt.legend()
plt.show()
plt.plot(epochs, val_precision, "bo", label="Validation precision")
plt.title("Validation Precision")
plt.legend()
plt.show()
# Let's load the best-performing model and evaluate it on the test data
model = keras.models.load_model("decrease_image_checkpoint_filepath")
model.evaluate(test_generator)
50/50 [==============================] - 1s 16ms/step - loss: 0.1189 - accuracy: 0.9600 - precision_10: 0.9423
[0.11893752962350845, 0.9599999785423279, 0.942307710647583]
Decreasing the input image size from 150×150 to 100×100 resulted in a training accuracy of 0.9650 and a validation accuracy of 0.99. Training and validation for 30 epochs took close to 3 minutes. The test performance is almost identical to that of the configuration with the larger input size: the accuracy with the smaller image size is 0.96, while the accuracy with the larger image size is 0.9550.
# Let's load the best performing model
best_model = keras.models.load_model("decrease_image_checkpoint_filepath")
# Data Augmentation Layer
data_augmentation = keras.Sequential(
[
layers.RandomRotation(0.2),
layers.RandomZoom(0.19)
]
)
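For reference, Keras's RandomRotation interprets its factor as a fraction of a full circle (2π), so the 0.2 used above corresponds to rotations sampled from roughly ±72°. A quick conversion:

```python
def rotation_factor_to_degrees(factor):
    """Keras RandomRotation factors are fractions of a full circle,
    so factor=0.2 means rotations sampled from about [-72, +72] degrees."""
    return factor * 360.0

print(rotation_factor_to_degrees(0.2))
```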
def densenet(input_shape, n_classes, filters=32):
    def bn_rl_conv(x, filters, kernel=1, strides=1):
        # BatchNorm -> ReLU -> Conv, the pre-activation ordering used by DenseNet
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
        x = layers.Conv2D(filters, kernel, strides=strides, padding='same')(x)
        return x
    def dense_block(x, repetitions):
        for _ in range(repetitions):
            y = bn_rl_conv(x, 4 * filters)
            y = bn_rl_conv(y, filters, 3)
            x = layers.concatenate([x, y])
        return x
    def transition_layer(x):
        # Halve the channel count, then halve the spatial resolution
        x = bn_rl_conv(x, keras.backend.int_shape(x)[-1] // 2)
        x = layers.AvgPool2D(2, strides=2, padding='same')(x)
        return x
    inputs = keras.Input(shape=input_shape)
    x = data_augmentation(inputs)
    # No Rescaling layer here: the data generators already rescale pixels by 1/255
    x = layers.Conv2D(7, 3, strides=2, padding='same')(x)
    x = layers.MaxPool2D(4, strides=2, padding='same')(x)
    for repetitions in [6, 12, 24, 16]:
        d = dense_block(x, repetitions)
        x = transition_layer(d)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dropout(0.5)(x)  # Common dropout rate for regularization
    # Binary classification, so a single sigmoid output unit
    outputs = layers.Dense(1, activation="sigmoid")(x)
    model = keras.Model(inputs=inputs, outputs=outputs)
    return model
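Each pass through `dense_block` concatenates `filters` (the growth rate) new channels onto its input, so after r repetitions a block outputs in_channels + r·filters channels, and each `transition_layer` then halves that (floor division). A plain-Python trace of the channel counts through the four blocks above, given that the stem convolution outputs 7 channels and the growth rate is 32:

```python
def densenet_channels(stem_channels, growth, block_repetitions):
    """Trace channel counts: each dense block adds `growth` channels per
    repetition; each transition layer halves the count (floor division)."""
    c = stem_channels
    trace = []
    for reps in block_repetitions:
        c = c + reps * growth  # dense-block concatenations
        c = c // 2             # transition-layer compression
        trace.append(c)
    return trace

print(densenet_channels(7, 32, [6, 12, 24, 16]))  # [99, 241, 504, 508]
```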
# Example usage
input_shape = (150, 150, 3)
n_classes = 2
dense_net_model = densenet(input_shape, n_classes)
# Let's compile the CNN model using binary cross-entropy as the loss function and Adam as the optimizer
dense_net_model.compile(loss = 'binary_crossentropy', optimizer = 'adam', metrics = ['accuracy', keras.metrics.Precision()])
# Let's define the callbacks for Model saving and Early stopping
cb_check = keras.callbacks.ModelCheckpoint(
    filepath="densenet_checkpoint_filepath",
    save_best_only=True,
    monitor="val_loss")
cb_early = keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=30
)
# Let's train and validate model on the training and validation data
history_densenet = dense_net_model.fit(train_generator, validation_data = valid_generator, epochs = 50, verbose = 1, batch_size = 8, callbacks = [cb_check, cb_early])
Epoch 1/50 150/150 [==============================] - 154s 471ms/step - loss: 1.0819 - accuracy: 0.6383 - precision_14: 0.6301 - val_loss: 1.0697 - val_accuracy: 0.5000 - val_precision_14: 0.5000 Epoch 2/50 150/150 [==============================] - 69s 459ms/step - loss: 0.7669 - accuracy: 0.7067 - precision_14: 0.6902 - val_loss: 0.8574 - val_accuracy: 0.5000 - val_precision_14: 0.5000 Epoch 3/50 150/150 [==============================] - 15s 100ms/step - loss: 0.6293 - accuracy: 0.7667 - precision_14: 0.7516 - val_loss: 13.2370 - val_accuracy: 0.5850 - val_precision_14: 0.5464 Epoch 4/50 150/150 [==============================] - 15s 102ms/step - loss: 0.5938 - accuracy: 0.7483 - precision_14: 0.7350 - val_loss: 0.9348 - val_accuracy: 0.5700 - val_precision_14: 0.5376 Epoch 5/50 150/150 [==============================] - 15s 102ms/step - loss: 0.5425 - accuracy: 0.7850 - precision_14: 0.7680 - val_loss: 5.3569 - val_accuracy: 0.6000 - val_precision_14: 0.5568 Epoch 6/50 150/150 [==============================] - 69s 462ms/step - loss: 0.5367 - accuracy: 0.7683 - precision_14: 0.7477 - val_loss: 0.5228 - val_accuracy: 0.6800 - val_precision_14: 0.6098 Epoch 7/50 150/150 [==============================] - 67s 450ms/step - loss: 0.4354 - accuracy: 0.8233 - precision_14: 0.8212 - val_loss: 0.1272 - val_accuracy: 0.9650 - val_precision_14: 0.9429 Epoch 8/50 150/150 [==============================] - 15s 100ms/step - loss: 0.4544 - accuracy: 0.8367 - precision_14: 0.8137 - val_loss: 0.3169 - val_accuracy: 0.8950 - val_precision_14: 0.8264 Epoch 9/50 150/150 [==============================] - 15s 100ms/step - loss: 0.4140 - accuracy: 0.8533 - precision_14: 0.8136 - val_loss: 0.2239 - val_accuracy: 0.8800 - val_precision_14: 0.9634 Epoch 10/50 150/150 [==============================] - 15s 101ms/step - loss: 0.4539 - accuracy: 0.8333 - precision_14: 0.8165 - val_loss: 0.1701 - val_accuracy: 0.9600 - val_precision_14: 0.9792 Epoch 11/50 150/150 
[==============================] - 15s 100ms/step - loss: 0.5097 - accuracy: 0.7900 - precision_14: 0.7843 - val_loss: 0.1638 - val_accuracy: 0.9400 - val_precision_14: 0.9490 Epoch 12/50 150/150 [==============================] - 15s 100ms/step - loss: 0.4602 - accuracy: 0.8067 - precision_14: 0.8087 - val_loss: 0.7410 - val_accuracy: 0.7000 - val_precision_14: 0.6250 Epoch 13/50 150/150 [==============================] - 15s 101ms/step - loss: 0.4192 - accuracy: 0.8183 - precision_14: 0.8173 - val_loss: 0.1477 - val_accuracy: 0.9650 - val_precision_14: 0.9346 Epoch 14/50 150/150 [==============================] - 15s 101ms/step - loss: 0.4483 - accuracy: 0.8200 - precision_14: 0.8097 - val_loss: 0.1985 - val_accuracy: 0.9300 - val_precision_14: 0.8772 Epoch 15/50 150/150 [==============================] - 67s 448ms/step - loss: 0.3779 - accuracy: 0.8550 - precision_14: 0.8610 - val_loss: 0.1144 - val_accuracy: 0.9700 - val_precision_14: 0.9700 Epoch 16/50 150/150 [==============================] - 15s 101ms/step - loss: 0.4337 - accuracy: 0.8217 - precision_14: 0.8206 - val_loss: 0.2445 - val_accuracy: 0.9400 - val_precision_14: 0.8929 Epoch 17/50 150/150 [==============================] - 15s 100ms/step - loss: 0.3450 - accuracy: 0.8717 - precision_14: 0.8729 - val_loss: 0.1789 - val_accuracy: 0.9300 - val_precision_14: 0.8772 Epoch 18/50 150/150 [==============================] - 15s 101ms/step - loss: 0.3972 - accuracy: 0.8383 - precision_14: 0.8587 - val_loss: 0.1224 - val_accuracy: 0.9500 - val_precision_14: 0.9787 Epoch 19/50 150/150 [==============================] - 15s 101ms/step - loss: 0.3249 - accuracy: 0.8767 - precision_14: 0.8870 - val_loss: 0.2599 - val_accuracy: 0.9500 - val_precision_14: 0.9091 Epoch 20/50 150/150 [==============================] - 15s 101ms/step - loss: 0.3423 - accuracy: 0.8617 - precision_14: 0.8703 - val_loss: 0.1404 - val_accuracy: 0.9600 - val_precision_14: 0.9259 Epoch 21/50 150/150 [==============================] - 15s 
101ms/step - loss: 0.4046 - accuracy: 0.8500 - precision_14: 0.8523 - val_loss: 0.3873 - val_accuracy: 0.7200 - val_precision_14: 0.9583 Epoch 22/50 150/150 [==============================] - 15s 100ms/step - loss: 0.3407 - accuracy: 0.8550 - precision_14: 0.8711 - val_loss: 0.2897 - val_accuracy: 0.8550 - val_precision_14: 0.7752 Epoch 23/50 150/150 [==============================] - 15s 101ms/step - loss: 0.3335 - accuracy: 0.8650 - precision_14: 0.8815 - val_loss: 0.2158 - val_accuracy: 0.9150 - val_precision_14: 0.8547 Epoch 24/50 150/150 [==============================] - 15s 101ms/step - loss: 0.3205 - accuracy: 0.8783 - precision_14: 0.9068 - val_loss: 0.1467 - val_accuracy: 0.9550 - val_precision_14: 0.9174 Epoch 25/50 150/150 [==============================] - 15s 101ms/step - loss: 0.3175 - accuracy: 0.8767 - precision_14: 0.8924 - val_loss: 0.1245 - val_accuracy: 0.9900 - val_precision_14: 0.9804 Epoch 26/50 150/150 [==============================] - 69s 461ms/step - loss: 0.3219 - accuracy: 0.8800 - precision_14: 0.9101 - val_loss: 0.0833 - val_accuracy: 0.9900 - val_precision_14: 0.9804 Epoch 27/50 150/150 [==============================] - 15s 101ms/step - loss: 0.3646 - accuracy: 0.8733 - precision_14: 0.8944 - val_loss: 0.5541 - val_accuracy: 0.7950 - val_precision_14: 0.7092 Epoch 28/50 150/150 [==============================] - 15s 101ms/step - loss: 0.2813 - accuracy: 0.8850 - precision_14: 0.9053 - val_loss: 0.1305 - val_accuracy: 0.9500 - val_precision_14: 0.9688 Epoch 29/50 150/150 [==============================] - 15s 100ms/step - loss: 0.3512 - accuracy: 0.8600 - precision_14: 0.8942 - val_loss: 0.0835 - val_accuracy: 0.9600 - val_precision_14: 0.9259 Epoch 30/50 150/150 [==============================] - 15s 101ms/step - loss: 0.2801 - accuracy: 0.8917 - precision_14: 0.9181 - val_loss: 0.3763 - val_accuracy: 0.8650 - val_precision_14: 0.7874 Epoch 31/50 150/150 [==============================] - 67s 449ms/step - loss: 0.3375 - accuracy: 
0.8667 - precision_14: 0.9074 - val_loss: 0.0397 - val_accuracy: 1.0000 - val_precision_14: 1.0000 Epoch 32/50 150/150 [==============================] - 15s 101ms/step - loss: 0.2642 - accuracy: 0.9017 - precision_14: 0.9288 - val_loss: 0.1702 - val_accuracy: 0.9900 - val_precision_14: 0.9804 Epoch 33/50 150/150 [==============================] - 69s 465ms/step - loss: 0.2441 - accuracy: 0.9167 - precision_14: 0.9464 - val_loss: 0.0203 - val_accuracy: 0.9850 - val_precision_14: 0.9709 Epoch 34/50 150/150 [==============================] - 15s 102ms/step - loss: 0.2301 - accuracy: 0.9033 - precision_14: 0.9291 - val_loss: 0.5500 - val_accuracy: 0.5650 - val_precision_14: 0.9333 Epoch 35/50 150/150 [==============================] - 15s 100ms/step - loss: 0.2651 - accuracy: 0.9050 - precision_14: 0.9355 - val_loss: 0.1699 - val_accuracy: 0.9850 - val_precision_14: 0.9709 Epoch 36/50 150/150 [==============================] - 15s 100ms/step - loss: 0.2782 - accuracy: 0.9000 - precision_14: 0.9317 - val_loss: 0.1615 - val_accuracy: 0.9850 - val_precision_14: 0.9709 Epoch 37/50 150/150 [==============================] - 15s 101ms/step - loss: 0.2400 - accuracy: 0.9183 - precision_14: 0.9466 - val_loss: 0.2450 - val_accuracy: 0.9050 - val_precision_14: 0.8403 Epoch 38/50 150/150 [==============================] - 15s 101ms/step - loss: 0.2377 - accuracy: 0.9117 - precision_14: 0.9364 - val_loss: 0.3609 - val_accuracy: 0.7600 - val_precision_14: 0.6757 Epoch 39/50 150/150 [==============================] - 15s 101ms/step - loss: 0.2762 - accuracy: 0.9017 - precision_14: 0.9199 - val_loss: 0.0789 - val_accuracy: 0.9700 - val_precision_14: 0.9434 Epoch 40/50 150/150 [==============================] - 15s 100ms/step - loss: 0.2104 - accuracy: 0.9233 - precision_14: 0.9472 - val_loss: 1.0651 - val_accuracy: 0.5050 - val_precision_14: 0.5106 Epoch 41/50 150/150 [==============================] - 15s 100ms/step - loss: 0.2173 - accuracy: 0.9200 - precision_14: 0.9532 - 
val_loss: 0.1442 - val_accuracy: 0.9650 - val_precision_14: 0.9346 Epoch 42/50 150/150 [==============================] - 68s 452ms/step - loss: 0.2139 - accuracy: 0.9250 - precision_14: 0.9537 - val_loss: 0.0094 - val_accuracy: 1.0000 - val_precision_14: 1.0000 Epoch 43/50 150/150 [==============================] - 15s 102ms/step - loss: 0.1740 - accuracy: 0.9333 - precision_14: 0.9514 - val_loss: 0.1388 - val_accuracy: 0.9600 - val_precision_14: 0.9259 Epoch 44/50 150/150 [==============================] - 15s 102ms/step - loss: 0.2099 - accuracy: 0.9300 - precision_14: 0.9542 - val_loss: 0.3056 - val_accuracy: 0.8500 - val_precision_14: 0.7692 Epoch 45/50 150/150 [==============================] - 15s 102ms/step - loss: 0.1951 - accuracy: 0.9400 - precision_14: 0.9714 - val_loss: 0.4211 - val_accuracy: 0.8250 - val_precision_14: 0.7407 Epoch 46/50 150/150 [==============================] - 15s 102ms/step - loss: 0.2201 - accuracy: 0.9150 - precision_14: 0.9278 - val_loss: 0.0362 - val_accuracy: 0.9900 - val_precision_14: 0.9804 Epoch 47/50 150/150 [==============================] - 15s 101ms/step - loss: 0.2002 - accuracy: 0.9350 - precision_14: 0.9579 - val_loss: 0.0593 - val_accuracy: 0.9950 - val_precision_14: 0.9901 Epoch 48/50 150/150 [==============================] - 15s 102ms/step - loss: 0.1732 - accuracy: 0.9367 - precision_14: 0.9486 - val_loss: 0.0500 - val_accuracy: 0.9850 - val_precision_14: 0.9709 Epoch 49/50 150/150 [==============================] - 15s 100ms/step - loss: 0.1641 - accuracy: 0.9367 - precision_14: 0.9679 - val_loss: 0.2703 - val_accuracy: 0.9150 - val_precision_14: 0.8547 Epoch 50/50 150/150 [==============================] - 15s 103ms/step - loss: 0.1578 - accuracy: 0.9417 - precision_14: 0.9617 - val_loss: 0.0487 - val_accuracy: 0.9750 - val_precision_14: 0.9524
train_accuracy = history_densenet.history["accuracy"]
train_loss = history_densenet.history["loss"]
train_precision = history_densenet.history["precision_14"]
val_accuracy = history_densenet.history["val_accuracy"]
val_loss = history_densenet.history["val_loss"]
val_precision = history_densenet.history["val_precision_14"]
epochs = range(1, len(train_accuracy) + 1)
plt.plot(epochs, train_accuracy, "bo", label="Training accuracy")
plt.title("Training Accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, train_loss, "bo", label="Training loss")
plt.title("Training Loss")
plt.legend()
plt.show()
plt.plot(epochs, train_precision, "bo", label="Training precision")
plt.title("Training Precision")
plt.legend()
plt.show()
plt.plot(epochs, val_accuracy, "bo", label="Validation accuracy")
plt.title("Validation Accuracy")
plt.legend()
plt.figure()
plt.plot(epochs, val_loss, "bo", label="Validation loss")
plt.title("Validation Loss")
plt.legend()
plt.show()
plt.plot(epochs, val_precision, "bo", label="Validation precision")
plt.title("Validation Precision")
plt.legend()
plt.show()
# Let's load the best-performing model and evaluate it on the test data
model = keras.models.load_model("densenet_checkpoint_filepath")
model.evaluate(test_generator)
50/50 [==============================] - 4s 20ms/step - loss: 0.0443 - accuracy: 0.9900 - precision_14: 0.9804
[0.04431105777621269, 0.9900000095367432, 0.9803921580314636]
Training a DenseNet-style architecture from scratch on the political meme dataset resulted in a training accuracy of 0.9417 and a validation accuracy of 0.9750. Training and validation for 50 epochs took close to 22 minutes. The test performance improved compared to the base configuration: the accuracy with the DenseNet architecture is 0.99, while the accuracy with the base configuration is 0.9550.
def resnet_block(input_tensor, filters, kernel_size=3, strides=1):
    """A typical ResNet block with a shortcut connection."""
    # Shortcut connection to be added back to the main path
    shortcut = input_tensor
    if input_tensor.shape[-1] != filters or strides != 1:
        # Project the shortcut with a 1x1 conv so its shape matches the main path
        shortcut = layers.Conv2D(filters, 1, strides=strides, padding='same')(shortcut)
        shortcut = layers.BatchNormalization()(shortcut)
    # First convolution layer of the block
    x = layers.Conv2D(filters, kernel_size, strides=strides, padding='same')(input_tensor)
    x = layers.BatchNormalization()(x)
    x = layers.Activation('relu')(x)
    # Second convolution layer of the block
    x = layers.Conv2D(filters, kernel_size, padding='same')(x)
    x = layers.BatchNormalization()(x)
    # Add the shortcut back to the main path before the final ReLU
    x = layers.Add()([x, shortcut])
    x = layers.Activation('relu')(x)
    return x
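The key shape constraint in the block is that `Add()` requires the main path and the shortcut to match exactly. A minimal standalone sketch of the projection trick (illustrative shapes, not the notebook's own model):

```python
from tensorflow import keras
from keras import layers

# When the stride or channel count changes, a 1x1 strided conv brings the
# shortcut to the same spatial size and channel count as the main path,
# which is what makes the Add() merge valid.
inp = keras.Input(shape=(32, 32, 64))
main = layers.Conv2D(128, 3, strides=2, padding='same')(inp)  # main path
proj = layers.Conv2D(128, 1, strides=2, padding='same')(inp)  # projected shortcut
out = layers.Add()([main, proj])
model = keras.Model(inp, out)
print(model.output_shape)  # (None, 16, 16, 128)
```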
def build_resnet(input_shape, n_classes, block_config):
    """Builds a custom ResNet-like architecture using the given block configuration."""
    inputs = keras.Input(shape=input_shape)
    # Chain the augmented tensor forward; the data generators already rescale
    # pixels to [0, 1], so no extra Rescaling layer is applied here
    x = data_augmentation(inputs)
    x = layers.Conv2D(64, 7, strides=2, padding='same')(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation('relu')(x)
    x = layers.MaxPooling2D(3, strides=2, padding='same')(x)
    # Process each block configuration
    for num_filters, repetitions, stride in block_config:
        for _ in range(repetitions):
            x = resnet_block(x, num_filters, strides=stride)
            stride = 1  # apply the stride only to the first block of each stage
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(1, activation='sigmoid')(x)
    model = keras.Model(inputs=inputs, outputs=outputs)
    return model
# Example usage
input_shape = (150, 150, 3)
n_classes = 2
block_config = [(64, 2, 1), (128, 2, 2), (256, 2, 2), (512, 2, 2)] # (filters, repetitions, stride)
resnet_model = build_resnet(input_shape, n_classes, block_config)
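For reference, the `block_config` above implies a ResNet-18-style depth, which can be counted directly. A quick sanity check, not part of the training code:

```python
# Count the conv layers implied by block_config: one 7x7 stem conv plus two
# 3x3 convs per residual block (shortcut projection convs not counted).
block_config = [(64, 2, 1), (128, 2, 2), (256, 2, 2), (512, 2, 2)]
stem_convs = 1
block_convs = sum(2 * repetitions for _, repetitions, _ in block_config)
total = stem_convs + block_convs
print(total)  # 17 conv layers; adding the final Dense layer gives a ResNet-18-style depth
```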
# Let's compile the model using binary cross-entropy as the loss function and Adam as the optimizer
resnet_model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy', keras.metrics.Precision()])
# Let's define the callbacks for model checkpointing and early stopping
cb_check = keras.callbacks.ModelCheckpoint(
    filepath="resnet_checkpoint_filepath",
    save_best_only=True,
    monitor="val_loss")
cb_early = keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=30)
# Let's train and validate the model (the batch size is set by the generator)
history_resnet = resnet_model.fit(train_generator, validation_data=valid_generator, epochs=50, verbose=1, callbacks=[cb_check, cb_early])
Epoch 1/50 150/150 [==============================] - 23s 83ms/step - loss: 1.0721 - accuracy: 0.5333 - precision_20: 0.5321 - val_loss: 0.7139 - val_accuracy: 0.5000 - val_precision_20: 0.0000e+00 Epoch 2/50 150/150 [==============================] - 5s 30ms/step - loss: 0.7597 - accuracy: 0.6367 - precision_20: 0.6404 - val_loss: 0.9335 - val_accuracy: 0.5000 - val_precision_20: 0.5000 Epoch 3/50 150/150 [==============================] - 4s 27ms/step - loss: 0.7598 - accuracy: 0.6150 - precision_20: 0.6082 - val_loss: 17.2574 - val_accuracy: 0.5100 - val_precision_20: 1.0000 Epoch 4/50 150/150 [==============================] - 4s 28ms/step - loss: 0.7276 - accuracy: 0.6133 - precision_20: 0.5971 - val_loss: 8.3056 - val_accuracy: 0.5100 - val_precision_20: 0.5059 Epoch 5/50 150/150 [==============================] - 4s 28ms/step - loss: 0.8275 - accuracy: 0.4933 - precision_20: 0.4939 - val_loss: 1.3000 - val_accuracy: 0.5700 - val_precision_20: 0.5795 Epoch 6/50 150/150 [==============================] - 4s 28ms/step - loss: 0.7130 - accuracy: 0.5917 - precision_20: 0.5793 - val_loss: 0.7927 - val_accuracy: 0.5000 - val_precision_20: 0.5000 Epoch 7/50 150/150 [==============================] - 4s 28ms/step - loss: 0.6178 - accuracy: 0.6867 - precision_20: 0.6564 - val_loss: 0.7308 - val_accuracy: 0.5900 - val_precision_20: 0.5495 Epoch 8/50 150/150 [==============================] - 11s 76ms/step - loss: 0.6230 - accuracy: 0.6800 - precision_20: 0.6543 - val_loss: 0.5768 - val_accuracy: 0.7600 - val_precision_20: 0.6757 Epoch 9/50 150/150 [==============================] - 15s 98ms/step - loss: 0.5311 - accuracy: 0.7433 - precision_20: 0.7147 - val_loss: 0.4016 - val_accuracy: 0.8200 - val_precision_20: 0.8636 Epoch 10/50 150/150 [==============================] - 4s 30ms/step - loss: 0.5225 - accuracy: 0.7850 - precision_20: 0.7568 - val_loss: 2.6000 - val_accuracy: 0.5000 - val_precision_20: 0.0000e+00 Epoch 11/50 150/150 [==============================] - 
4s 29ms/step - loss: 0.4463 - accuracy: 0.8150 - precision_20: 0.7838 - val_loss: 0.4956 - val_accuracy: 0.8050 - val_precision_20: 0.7194 Epoch 12/50 150/150 [==============================] - 4s 27ms/step - loss: 0.4128 - accuracy: 0.8350 - precision_20: 0.8170 - val_loss: 0.9440 - val_accuracy: 0.5000 - val_precision_20: 0.5000 Epoch 13/50 150/150 [==============================] - 4s 27ms/step - loss: 0.4162 - accuracy: 0.8383 - precision_20: 0.8222 - val_loss: 0.7672 - val_accuracy: 0.6250 - val_precision_20: 0.6374 Epoch 14/50 150/150 [==============================] - 4s 29ms/step - loss: 0.3786 - accuracy: 0.8400 - precision_20: 0.8228 - val_loss: 1.7414 - val_accuracy: 0.5000 - val_precision_20: 0.0000e+00 Epoch 15/50 150/150 [==============================] - 4s 27ms/step - loss: 0.3789 - accuracy: 0.8533 - precision_20: 0.8397 - val_loss: 5.2724 - val_accuracy: 0.5000 - val_precision_20: 0.5000 Epoch 16/50 150/150 [==============================] - 4s 27ms/step - loss: 0.3352 - accuracy: 0.8700 - precision_20: 0.8700 - val_loss: 4.4575 - val_accuracy: 0.5000 - val_precision_20: 0.0000e+00 Epoch 17/50 150/150 [==============================] - 4s 28ms/step - loss: 0.3409 - accuracy: 0.8683 - precision_20: 0.8576 - val_loss: 8.3606 - val_accuracy: 0.5000 - val_precision_20: 0.5000 Epoch 18/50 150/150 [==============================] - 4s 27ms/step - loss: 0.3436 - accuracy: 0.8833 - precision_20: 0.8783 - val_loss: 0.7157 - val_accuracy: 0.5850 - val_precision_20: 0.7297 Epoch 19/50 150/150 [==============================] - 4s 27ms/step - loss: 0.3015 - accuracy: 0.8833 - precision_20: 0.8710 - val_loss: 2.3514 - val_accuracy: 0.5000 - val_precision_20: 0.0000e+00 Epoch 20/50 150/150 [==============================] - 4s 28ms/step - loss: 0.2922 - accuracy: 0.8967 - precision_20: 0.8889 - val_loss: 5.9413 - val_accuracy: 0.5000 - val_precision_20: 0.0000e+00 Epoch 21/50 150/150 [==============================] - 4s 28ms/step - loss: 0.2546 - accuracy: 
0.9100 - precision_20: 0.9073 - val_loss: 45.0503 - val_accuracy: 0.5000 - val_precision_20: 0.0000e+00 Epoch 22/50 150/150 [==============================] - 4s 28ms/step - loss: 0.2670 - accuracy: 0.8883 - precision_20: 0.8795 - val_loss: 3.0206 - val_accuracy: 0.5000 - val_precision_20: 0.5000 Epoch 23/50 150/150 [==============================] - 4s 28ms/step - loss: 0.2893 - accuracy: 0.8983 - precision_20: 0.8944 - val_loss: 1.4866 - val_accuracy: 0.5950 - val_precision_20: 1.0000 Epoch 24/50 150/150 [==============================] - 4s 27ms/step - loss: 0.2471 - accuracy: 0.8983 - precision_20: 0.8918 - val_loss: 2.2500 - val_accuracy: 0.5000 - val_precision_20: 0.0000e+00 Epoch 25/50 150/150 [==============================] - 4s 27ms/step - loss: 0.2548 - accuracy: 0.9083 - precision_20: 0.9070 - val_loss: 1.2184 - val_accuracy: 0.5900 - val_precision_20: 0.8750 Epoch 26/50 150/150 [==============================] - 4s 28ms/step - loss: 0.2098 - accuracy: 0.9350 - precision_20: 0.9336 - val_loss: 3.0694 - val_accuracy: 0.5050 - val_precision_20: 1.0000 Epoch 27/50 150/150 [==============================] - 4s 27ms/step - loss: 0.1948 - accuracy: 0.9317 - precision_20: 0.9164 - val_loss: 2.4850 - val_accuracy: 0.5000 - val_precision_20: 0.0000e+00 Epoch 28/50 150/150 [==============================] - 4s 28ms/step - loss: 0.1843 - accuracy: 0.9333 - precision_20: 0.9362 - val_loss: 2.8117 - val_accuracy: 0.5200 - val_precision_20: 0.7500 Epoch 29/50 150/150 [==============================] - 4s 28ms/step - loss: 0.2779 - accuracy: 0.8967 - precision_20: 0.8814 - val_loss: 3.2810 - val_accuracy: 0.5250 - val_precision_20: 1.0000 Epoch 30/50 150/150 [==============================] - 4s 27ms/step - loss: 0.1674 - accuracy: 0.9467 - precision_20: 0.9379 - val_loss: 5.0971 - val_accuracy: 0.5000 - val_precision_20: 0.0000e+00 Epoch 31/50 150/150 [==============================] - 4s 28ms/step - loss: 0.1512 - accuracy: 0.9483 - precision_20: 0.9498 - val_loss: 
4.5533 - val_accuracy: 0.5000 - val_precision_20: 0.0000e+00 Epoch 32/50 150/150 [==============================] - 4s 27ms/step - loss: 0.1213 - accuracy: 0.9500 - precision_20: 0.9383 - val_loss: 10.4924 - val_accuracy: 0.5000 - val_precision_20: 0.0000e+00 Epoch 33/50 150/150 [==============================] - 4s 27ms/step - loss: 0.1855 - accuracy: 0.9383 - precision_20: 0.9283 - val_loss: 0.6097 - val_accuracy: 0.8600 - val_precision_20: 0.7812 Epoch 34/50 150/150 [==============================] - 4s 29ms/step - loss: 0.1236 - accuracy: 0.9533 - precision_20: 0.9474 - val_loss: 13.6647 - val_accuracy: 0.5000 - val_precision_20: 0.0000e+00 Epoch 35/50 150/150 [==============================] - 5s 31ms/step - loss: 0.1166 - accuracy: 0.9583 - precision_20: 0.9538 - val_loss: 22.5225 - val_accuracy: 0.5000 - val_precision_20: 0.0000e+00 Epoch 36/50 150/150 [==============================] - 4s 28ms/step - loss: 0.1600 - accuracy: 0.9450 - precision_20: 0.9435 - val_loss: 16.7092 - val_accuracy: 0.5000 - val_precision_20: 0.0000e+00 Epoch 37/50 150/150 [==============================] - 4s 28ms/step - loss: 0.1182 - accuracy: 0.9550 - precision_20: 0.9505 - val_loss: 10.7463 - val_accuracy: 0.5000 - val_precision_20: 0.0000e+00 Epoch 38/50 150/150 [==============================] - 4s 28ms/step - loss: 0.1049 - accuracy: 0.9700 - precision_20: 0.9669 - val_loss: 0.4270 - val_accuracy: 0.8200 - val_precision_20: 0.8019 Epoch 39/50 150/150 [==============================] - 4s 28ms/step - loss: 0.1206 - accuracy: 0.9650 - precision_20: 0.9574 - val_loss: 2.1642 - val_accuracy: 0.6050 - val_precision_20: 0.9565
train_accuracy = history_resnet.history["accuracy"]
train_loss = history_resnet.history["loss"]
train_precision = history_resnet.history["precision_20"]
val_accuracy = history_resnet.history["val_accuracy"]
val_loss = history_resnet.history["val_loss"]
val_precision = history_resnet.history["val_precision_20"]
epochs = range(1, len(train_accuracy) + 1)
plt.plot(epochs, train_accuracy, "bo", label="Training accuracy")
plt.title("Training Accuracy")
plt.legend()
plt.show()
plt.plot(epochs, train_loss, "bo", label="Training loss")
plt.title("Training Loss")
plt.legend()
plt.show()
plt.plot(epochs, train_precision, "bo", label="Training precision")
plt.title("Training Precision")
plt.legend()
plt.show()
plt.plot(epochs, val_accuracy, "bo", label="Validation accuracy")
plt.title("Validation Accuracy")
plt.legend()
plt.show()
plt.plot(epochs, val_loss, "bo", label="Validation loss")
plt.title("Validation Loss")
plt.legend()
plt.show()
plt.plot(epochs, val_precision, "bo", label="Validation precision")
plt.title("Validation Precision")
plt.legend()
plt.show()
# Let's load the best-performing model and evaluate it on the test data
model = keras.models.load_model("resnet_checkpoint_filepath")
model.evaluate(test_generator)
50/50 [==============================] - 2s 17ms/step - loss: 0.4244 - accuracy: 0.7500 - precision_20: 0.7907
[0.4244219958782196, 0.75, 0.7906976938247681]
Training the custom ResNet architecture on the political meme dataset resulted in a training accuracy of 0.97 and a validation accuracy of 0.82. Training and validation for 50 epochs took approximately 3 minutes. However, test performance decreased relative to the base configuration: 0.75 accuracy with the ResNet architecture versus 0.96 with the base configuration. The model overfitted; validation accuracy hovered around 0.50 for most of the run and only occasionally exceeded 0.75. The combination of a small dataset and a large network is the likely cause of the low validation accuracies.
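A common remedy for this kind of overfitting is stronger data augmentation. Below is a minimal, hypothetical pipeline (the notebook's own `data_augmentation` may differ); note that horizontal flips are deliberately omitted, since flipping would mirror the text in the memes:

```python
from tensorflow import keras
from keras import layers

# Hypothetical augmentation pipeline: small random rotations, zooms, and
# translations; no horizontal flip, which would mirror the meme text.
data_augmentation = keras.Sequential([
    layers.RandomRotation(0.05),
    layers.RandomZoom(0.1),
    layers.RandomTranslation(0.1, 0.1),
])
```

The layers are active only when called with `training=True` (as during `fit`); at inference time they pass images through unchanged.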
# Define the input shape and number of classes
input_shape = (150, 150, 3)
num_classes = 2
def build_pretrained_resnet(input_shape, n_classes):
    # Load ResNet50 pre-trained on ImageNet, without its classification head
    base_model = keras.applications.ResNet50(include_top=False, weights='imagenet', input_shape=input_shape)
    x = base_model.output
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dense(1024, activation='relu')(x)
    x = layers.Dropout(0.5)(x)
    predictions = layers.Dense(1, activation='sigmoid')(x)
    model = keras.models.Model(inputs=base_model.input, outputs=predictions)
    # Freeze the first few layers to keep them fixed during training
    for layer in base_model.layers[:3]:
        layer.trainable = False
    return model
# Build the model on top of pre-trained ResNet50 weights
resnet_pretrained_model = build_pretrained_resnet(input_shape, num_classes)
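Freezing only `base_model.layers[:3]` leaves nearly all of ResNet50 trainable, so this setup is closer to full fine-tuning than to classic frozen-feature transfer. A quick standalone check (using `weights=None` here only to skip the ImageNet download):

```python
from tensorflow import keras

# Count frozen vs. total layers when only the first three are frozen.
base = keras.applications.ResNet50(include_top=False, weights=None, input_shape=(150, 150, 3))
for layer in base.layers[:3]:
    layer.trainable = False
frozen = sum(1 for l in base.layers if not l.trainable)
print(frozen, "of", len(base.layers), "layers frozen")
```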
# Let's compile the model using binary cross-entropy as the loss function and Adam as the optimizer
resnet_pretrained_model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy', keras.metrics.Precision()])
# Let's define the callbacks for model checkpointing and early stopping
cb_check = keras.callbacks.ModelCheckpoint(
    filepath="resnet_pretrained_checkpoint_filepath",
    save_best_only=True,
    monitor="val_loss")
cb_early = keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=30)
# Let's train and validate the model (the batch size is set by the generator)
history_pretrained_resnet = resnet_pretrained_model.fit(train_generator, validation_data=valid_generator, epochs=50, verbose=1, callbacks=[cb_check, cb_early])
Epoch 1/50 150/150 [==============================] - 62s 198ms/step - loss: 0.8536 - accuracy: 0.7517 - precision_23: 0.7227 - val_loss: 4.4361 - val_accuracy: 0.5000 - val_precision_23: 0.0000e+00 Epoch 2/50 150/150 [==============================] - 30s 200ms/step - loss: 0.4480 - accuracy: 0.8433 - precision_23: 0.8480 - val_loss: 0.7048 - val_accuracy: 0.5000 - val_precision_23: 0.5000 Epoch 3/50 150/150 [==============================] - 30s 203ms/step - loss: 0.4791 - accuracy: 0.7917 - precision_23: 0.7986 - val_loss: 0.6931 - val_accuracy: 0.5000 - val_precision_23: 0.5000 Epoch 4/50 150/150 [==============================] - 8s 56ms/step - loss: 0.4349 - accuracy: 0.8317 - precision_23: 0.8220 - val_loss: 0.7118 - val_accuracy: 0.4750 - val_precision_23: 0.3529 Epoch 5/50 150/150 [==============================] - 8s 52ms/step - loss: 0.4108 - accuracy: 0.8583 - precision_23: 0.8694 - val_loss: 0.7156 - val_accuracy: 0.5000 - val_precision_23: 0.0000e+00 Epoch 6/50 150/150 [==============================] - 8s 52ms/step - loss: 0.4152 - accuracy: 0.8383 - precision_23: 0.8612 - val_loss: 7.1861 - val_accuracy: 0.5000 - val_precision_23: 0.0000e+00 Epoch 7/50 150/150 [==============================] - 8s 52ms/step - loss: 0.3513 - accuracy: 0.8717 - precision_23: 0.9084 - val_loss: 1.6102 - val_accuracy: 0.6600 - val_precision_23: 0.5952 Epoch 8/50 150/150 [==============================] - 28s 184ms/step - loss: 0.3404 - accuracy: 0.8667 - precision_23: 0.8901 - val_loss: 0.3694 - val_accuracy: 0.8600 - val_precision_23: 0.7812 Epoch 9/50 150/150 [==============================] - 31s 204ms/step - loss: 0.3513 - accuracy: 0.8750 - precision_23: 0.9213 - val_loss: 0.0810 - val_accuracy: 0.9850 - val_precision_23: 0.9802 Epoch 10/50 150/150 [==============================] - 8s 56ms/step - loss: 0.3210 - accuracy: 0.8900 - precision_23: 0.9149 - val_loss: 0.1294 - val_accuracy: 0.9650 - val_precision_23: 0.9794 Epoch 11/50 150/150 
[==============================] - 30s 202ms/step - loss: 0.3164 - accuracy: 0.8617 - precision_23: 0.8917 - val_loss: 0.0326 - val_accuracy: 0.9950 - val_precision_23: 0.9901 Epoch 12/50 150/150 [==============================] - 8s 56ms/step - loss: 0.3486 - accuracy: 0.8683 - precision_23: 0.8989 - val_loss: 0.2377 - val_accuracy: 0.9150 - val_precision_23: 0.8673 Epoch 13/50 150/150 [==============================] - 8s 52ms/step - loss: 0.3379 - accuracy: 0.8817 - precision_23: 0.9257 - val_loss: 0.3521 - val_accuracy: 0.8000 - val_precision_23: 1.0000 Epoch 14/50 150/150 [==============================] - 28s 184ms/step - loss: 0.3495 - accuracy: 0.8683 - precision_23: 0.9108 - val_loss: 0.0157 - val_accuracy: 1.0000 - val_precision_23: 1.0000 Epoch 15/50 150/150 [==============================] - 8s 56ms/step - loss: 0.2846 - accuracy: 0.9033 - precision_23: 0.9353 - val_loss: 0.0934 - val_accuracy: 0.9900 - val_precision_23: 0.9804 Epoch 16/50 150/150 [==============================] - 8s 51ms/step - loss: 0.2743 - accuracy: 0.9033 - precision_23: 0.9515 - val_loss: 0.0763 - val_accuracy: 0.9950 - val_precision_23: 0.9901 Epoch 17/50 150/150 [==============================] - 8s 50ms/step - loss: 0.2778 - accuracy: 0.8983 - precision_23: 0.9650 - val_loss: 0.2008 - val_accuracy: 0.9550 - val_precision_23: 0.9174 Epoch 18/50 150/150 [==============================] - 8s 50ms/step - loss: 0.2345 - accuracy: 0.9117 - precision_23: 0.9696 - val_loss: 0.4180 - val_accuracy: 0.8600 - val_precision_23: 0.9000 Epoch 19/50 150/150 [==============================] - 8s 52ms/step - loss: 0.2235 - accuracy: 0.9333 - precision_23: 0.9676 - val_loss: 0.0923 - val_accuracy: 0.9850 - val_precision_23: 1.0000 Epoch 20/50 150/150 [==============================] - 30s 202ms/step - loss: 0.2575 - accuracy: 0.9100 - precision_23: 0.9457 - val_loss: 0.0149 - val_accuracy: 0.9950 - val_precision_23: 1.0000 Epoch 21/50 150/150 [==============================] - 8s 55ms/step - 
loss: 0.2183 - accuracy: 0.9283 - precision_23: 0.9742 - val_loss: 0.2629 - val_accuracy: 0.9450 - val_precision_23: 0.9009 Epoch 22/50 150/150 [==============================] - 8s 50ms/step - loss: 0.2928 - accuracy: 0.9050 - precision_23: 0.9451 - val_loss: 0.0369 - val_accuracy: 1.0000 - val_precision_23: 1.0000 Epoch 23/50 150/150 [==============================] - 8s 52ms/step - loss: 0.2215 - accuracy: 0.9317 - precision_23: 0.9709 - val_loss: 0.0787 - val_accuracy: 0.9900 - val_precision_23: 0.9804 Epoch 24/50 150/150 [==============================] - 8s 52ms/step - loss: 0.2259 - accuracy: 0.9333 - precision_23: 0.9710 - val_loss: 0.1102 - val_accuracy: 0.9950 - val_precision_23: 1.0000 Epoch 25/50 150/150 [==============================] - 8s 51ms/step - loss: 0.1858 - accuracy: 0.9517 - precision_23: 0.9822 - val_loss: 0.1540 - val_accuracy: 0.9700 - val_precision_23: 0.9896 Epoch 26/50 150/150 [==============================] - 8s 52ms/step - loss: 0.2726 - accuracy: 0.9050 - precision_23: 0.9386 - val_loss: 0.0815 - val_accuracy: 0.9900 - val_precision_23: 0.9804 Epoch 27/50 150/150 [==============================] - 8s 52ms/step - loss: 0.2123 - accuracy: 0.9333 - precision_23: 0.9676 - val_loss: 1.3878 - val_accuracy: 0.8300 - val_precision_23: 0.7463 Epoch 28/50 150/150 [==============================] - 30s 201ms/step - loss: 0.1792 - accuracy: 0.9500 - precision_23: 0.9754 - val_loss: 0.0106 - val_accuracy: 1.0000 - val_precision_23: 1.0000 Epoch 29/50 150/150 [==============================] - 9s 57ms/step - loss: 0.2517 - accuracy: 0.9183 - precision_23: 0.9665 - val_loss: 7.4931 - val_accuracy: 0.3350 - val_precision_23: 0.3907 Epoch 30/50 150/150 [==============================] - 8s 51ms/step - loss: 0.1776 - accuracy: 0.9300 - precision_23: 0.9708 - val_loss: 0.0559 - val_accuracy: 0.9900 - val_precision_23: 0.9804 Epoch 31/50 150/150 [==============================] - 8s 52ms/step - loss: 0.1518 - accuracy: 0.9500 - precision_23: 0.9688 - 
val_loss: 0.0271 - val_accuracy: 1.0000 - val_precision_23: 1.0000 Epoch 32/50 150/150 [==============================] - 8s 51ms/step - loss: 0.1830 - accuracy: 0.9333 - precision_23: 0.9514 - val_loss: 0.0365 - val_accuracy: 0.9950 - val_precision_23: 0.9901 Epoch 33/50 150/150 [==============================] - 8s 52ms/step - loss: 0.1952 - accuracy: 0.9267 - precision_23: 0.9604 - val_loss: 0.3652 - val_accuracy: 0.9600 - val_precision_23: 0.9259 Epoch 34/50 150/150 [==============================] - 8s 52ms/step - loss: 0.1928 - accuracy: 0.9333 - precision_23: 0.9577 - val_loss: 0.0537 - val_accuracy: 0.9850 - val_precision_23: 0.9709 Epoch 35/50 150/150 [==============================] - 8s 52ms/step - loss: 0.2033 - accuracy: 0.9350 - precision_23: 0.9485 - val_loss: 0.6370 - val_accuracy: 0.6150 - val_precision_23: 0.7447 Epoch 36/50 150/150 [==============================] - 8s 52ms/step - loss: 0.1413 - accuracy: 0.9450 - precision_23: 0.9619 - val_loss: 0.9609 - val_accuracy: 0.8550 - val_precision_23: 0.7752 Epoch 37/50 150/150 [==============================] - 8s 52ms/step - loss: 0.1198 - accuracy: 0.9617 - precision_23: 0.9826 - val_loss: 0.0939 - val_accuracy: 0.9850 - val_precision_23: 0.9709 Epoch 38/50 150/150 [==============================] - 8s 52ms/step - loss: 0.1423 - accuracy: 0.9567 - precision_23: 0.9757 - val_loss: 0.2960 - val_accuracy: 0.9550 - val_precision_23: 0.9174 Epoch 39/50 150/150 [==============================] - 8s 52ms/step - loss: 0.1069 - accuracy: 0.9700 - precision_23: 0.9796 - val_loss: 0.0986 - val_accuracy: 0.9800 - val_precision_23: 0.9615 Epoch 40/50 150/150 [==============================] - 8s 52ms/step - loss: 0.1277 - accuracy: 0.9667 - precision_23: 0.9762 - val_loss: 0.0631 - val_accuracy: 0.9700 - val_precision_23: 1.0000 Epoch 41/50 150/150 [==============================] - 8s 51ms/step - loss: 0.1446 - accuracy: 0.9533 - precision_23: 0.9595 - val_loss: 0.1775 - val_accuracy: 0.9600 - val_precision_23: 
0.9259 Epoch 42/50 150/150 [==============================] - 8s 51ms/step - loss: 0.0615 - accuracy: 0.9817 - precision_23: 0.9865 - val_loss: 0.0775 - val_accuracy: 0.9950 - val_precision_23: 0.9901 Epoch 43/50 150/150 [==============================] - 8s 51ms/step - loss: 0.0649 - accuracy: 0.9733 - precision_23: 0.9830 - val_loss: 0.2315 - val_accuracy: 0.9450 - val_precision_23: 0.9009 Epoch 44/50 150/150 [==============================] - 8s 51ms/step - loss: 0.1421 - accuracy: 0.9483 - precision_23: 0.9590 - val_loss: 0.2876 - val_accuracy: 0.9800 - val_precision_23: 0.9615 Epoch 45/50 150/150 [==============================] - 8s 52ms/step - loss: 0.0884 - accuracy: 0.9667 - precision_23: 0.9795 - val_loss: 0.0128 - val_accuracy: 1.0000 - val_precision_23: 1.0000 Epoch 46/50 150/150 [==============================] - 8s 52ms/step - loss: 0.1058 - accuracy: 0.9650 - precision_23: 0.9697 - val_loss: 0.4196 - val_accuracy: 0.9350 - val_precision_23: 0.8850 Epoch 47/50 150/150 [==============================] - 8s 51ms/step - loss: 0.0993 - accuracy: 0.9733 - precision_23: 0.9765 - val_loss: 0.1011 - val_accuracy: 0.9800 - val_precision_23: 0.9615 Epoch 48/50 150/150 [==============================] - 8s 51ms/step - loss: 0.0925 - accuracy: 0.9717 - precision_23: 0.9732 - val_loss: 2.2448 - val_accuracy: 0.7650 - val_precision_23: 0.6803 Epoch 49/50 150/150 [==============================] - 8s 52ms/step - loss: 0.0645 - accuracy: 0.9733 - precision_23: 0.9765 - val_loss: 0.0269 - val_accuracy: 0.9950 - val_precision_23: 0.9901 Epoch 50/50 150/150 [==============================] - 8s 51ms/step - loss: 0.0428 - accuracy: 0.9817 - precision_23: 0.9898 - val_loss: 0.1493 - val_accuracy: 0.9600 - val_precision_23: 0.9259
train_accuracy = history_pretrained_resnet.history["accuracy"]
train_loss = history_pretrained_resnet.history["loss"]
train_precision = history_pretrained_resnet.history["precision_23"]
val_accuracy = history_pretrained_resnet.history["val_accuracy"]
val_loss = history_pretrained_resnet.history["val_loss"]
val_precision = history_pretrained_resnet.history["val_precision_23"]
epochs = range(1, len(train_accuracy) + 1)
plt.plot(epochs, train_accuracy, "bo", label="Training accuracy")
plt.title("Training Accuracy")
plt.legend()
plt.show()
plt.plot(epochs, train_loss, "bo", label="Training loss")
plt.title("Training Loss")
plt.legend()
plt.show()
plt.plot(epochs, train_precision, "bo", label="Training precision")
plt.title("Training Precision")
plt.legend()
plt.show()
plt.plot(epochs, val_accuracy, "bo", label="Validation accuracy")
plt.title("Validation Accuracy")
plt.legend()
plt.show()
plt.plot(epochs, val_loss, "bo", label="Validation loss")
plt.title("Validation Loss")
plt.legend()
plt.show()
plt.plot(epochs, val_precision, "bo", label="Validation precision")
plt.title("Validation Precision")
plt.legend()
plt.show()
# Let's load the best-performing model and evaluate it on the test data
model = keras.models.load_model("resnet_pretrained_checkpoint_filepath")
model.evaluate(test_generator)
50/50 [==============================] - 2s 18ms/step - loss: 0.0462 - accuracy: 0.9850 - precision_23: 0.9802
[0.04623214900493622, 0.9850000143051147, 0.9801980257034302]
Fine-tuning the pre-trained ResNet50 architecture on the political meme dataset for 50 epochs achieved a training accuracy of 0.9765 and a validation accuracy of 0.9950. Validation accuracy was stuck at 0.50 for the first several epochs but improved thereafter. Fine-tuning ResNet50 took about 10 minutes in total. Test performance also increased over the base configuration: 0.9850 accuracy with the fine-tuned pre-trained ResNet50 versus 0.96 with the base configuration.
from keras.applications.vgg16 import preprocess_input
def build_pretrained_vgg16(input_shape, n_classes):
    inputs = keras.Input(shape=input_shape)
    x = preprocess_input(inputs)
    # Load the VGG16 model, pre-trained on ImageNet data
    base_model = keras.applications.VGG16(include_top=False, weights='imagenet', input_tensor=x)
    x = base_model.output
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dense(1024, activation='relu')(x)
    x = layers.Dropout(0.2)(x)
    predictions = layers.Dense(1, activation='sigmoid')(x)  # use 'sigmoid' for binary output
    model = keras.models.Model(inputs=base_model.input, outputs=predictions)
    # Freeze the first few layers to prevent them from being updated during the first phase of training
    for layer in base_model.layers[:7]:
        layer.trainable = False
    return model
# Build the model on top of pre-trained VGG16 weights
vgg16_pretrained_model = build_pretrained_vgg16(input_shape, num_classes)
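One plausible reason the VGG16 run stalls near 0.50 accuracy (see the training log further below) is a preprocessing mismatch: the generators already rescale pixels to [0, 1], while `vgg16.preprocess_input` expects raw [0, 255] values. A small standalone sketch of the effect, purely illustrative:

```python
import numpy as np
from keras.applications.vgg16 import preprocess_input

# preprocess_input (caffe mode) flips RGB to BGR and subtracts the ImageNet
# channel means (~104/117/124). Applied to images already rescaled to [0, 1],
# every channel becomes strongly negative -- far outside the distribution the
# pre-trained weights were fitted on.
raw = np.full((1, 4, 4, 3), 127.5, dtype="float32")  # mid-gray in 0-255
scaled = raw / 255.0                                 # mid-gray in 0-1
print(preprocess_input(raw.copy()).mean())     # near 12.7 (127.5 minus the mean ~114.8)
print(preprocess_input(scaled.copy()).mean())  # near -114: all inputs pushed negative
```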
# Let's compile the model using binary cross-entropy as the loss function and Adam as the optimizer
vgg16_pretrained_model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy', keras.metrics.Precision()])
# Let's define the callbacks for model checkpointing and early stopping
cb_check = keras.callbacks.ModelCheckpoint(
    filepath="vgg_pretrained_checkpoint_filepath",
    save_best_only=True,
    monitor="val_loss")
cb_early = keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=30)
# Let's train and validate the model (the batch size is set by the generator)
history_vgg_pretrained = vgg16_pretrained_model.fit(train_generator, validation_data=valid_generator, epochs=50, verbose=1, callbacks=[cb_check, cb_early])
Epoch 1/50 150/150 [==============================] - 13s 64ms/step - loss: 1.7640 - accuracy: 0.4900 - precision_33: 0.4810 - val_loss: 0.6932 - val_accuracy: 0.5000 - val_precision_33: 0.0000e+00 Epoch 2/50 150/150 [==============================] - 4s 28ms/step - loss: 0.6940 - accuracy: 0.4917 - precision_33: 0.4954 - val_loss: 0.6932 - val_accuracy: 0.5000 - val_precision_33: 0.5000 Epoch 3/50 150/150 [==============================] - 6s 42ms/step - loss: 0.6933 - accuracy: 0.4983 - precision_33: 0.4991 - val_loss: 0.6931 - val_accuracy: 0.5000 - val_precision_33: 0.5000 Epoch 4/50 150/150 [==============================] - 7s 41ms/step - loss: 0.6934 - accuracy: 0.4783 - precision_33: 0.4539 - val_loss: 0.6931 - val_accuracy: 0.5000 - val_precision_33: 0.5000 Epoch 5/50 150/150 [==============================] - 4s 28ms/step - loss: 0.6935 - accuracy: 0.4833 - precision_33: 0.4806 - val_loss: 0.6932 - val_accuracy: 0.5000 - val_precision_33: 0.0000e+00 Epoch 6/50 150/150 [==============================] - 4s 26ms/step - loss: 0.6934 - accuracy: 0.4950 - precision_33: 0.4948 - val_loss: 0.6932 - val_accuracy: 0.5000 - val_precision_33: 0.0000e+00 Epoch 7/50 150/150 [==============================] - 4s 25ms/step - loss: 0.6933 - accuracy: 0.4833 - precision_33: 0.4457 - val_loss: 0.6931 - val_accuracy: 0.5000 - val_precision_33: 0.0000e+00 Epoch 8/50 150/150 [==============================] - 6s 41ms/step - loss: 0.6934 - accuracy: 0.4800 - precision_33: 0.4758 - val_loss: 0.6931 - val_accuracy: 0.5000 - val_precision_33: 0.0000e+00 Epoch 9/50 150/150 [==============================] - 4s 26ms/step - loss: 0.6934 - accuracy: 0.5083 - precision_33: 0.6667 - val_loss: 0.6931 - val_accuracy: 0.5000 - val_precision_33: 0.0000e+00 Epoch 10/50 150/150 [==============================] - 4s 26ms/step - loss: 0.6934 - accuracy: 0.4983 - precision_33: 0.4990 - val_loss: 0.6931 - val_accuracy: 0.5000 - val_precision_33: 0.0000e+00 Epoch 11/50 150/150 
[==============================] - 4s 28ms/step - loss: 0.6934 - accuracy: 0.4717 - precision_33: 0.4452 - val_loss: 0.6931 - val_accuracy: 0.5000 - val_precision_33: 0.0000e+00
... (epochs 12-37 omitted: loss stays near 0.693 and val_accuracy stays pinned at 0.5000 throughout) ...
Epoch 38/50 150/150 [==============================] - 4s 26ms/step - loss: 0.6932 - accuracy: 0.4767 - precision_33: 0.4757 - val_loss: 0.6931 - val_accuracy: 0.5000 - val_precision_33: 0.0000e+00
train_accuracy = history_vgg_pretrained.history["accuracy"]
train_loss = history_vgg_pretrained.history["loss"]
train_precision = history_vgg_pretrained.history["precision_33"]
val_accuracy = history_vgg_pretrained.history["val_accuracy"]
val_loss = history_vgg_pretrained.history["val_loss"]
val_precision = history_vgg_pretrained.history["val_precision_33"]
epochs = range(1, len(train_accuracy) + 1)
# Let's plot the learning curve for each tracked metric
curves = [
    ("Training Accuracy", train_accuracy),
    ("Training Loss", train_loss),
    ("Training Precision", train_precision),
    ("Validation Accuracy", val_accuracy),
    ("Validation Loss", val_loss),
    ("Validation Precision", val_precision),
]
for title, values in curves:
    plt.figure()
    plt.plot(epochs, values, "bo", label=title)
    plt.title(title)
    plt.legend()
    plt.show()
# Let's load the best-performing model and evaluate it on the test data
model = keras.models.load_model("vgg_pretrained_checkpoint_filepath")
model.evaluate(test_generator)
50/50 [==============================] - 1s 16ms/step - loss: 0.6931 - accuracy: 0.5000 - precision_33: 0.0000e+00
[0.6931471824645996, 0.5, 0.0]
Using the pre-trained VGG16 architecture fine-tuned on the political meme dataset, training stopped (via early stopping) after 38 of the 50 scheduled epochs, with a final training accuracy of 0.4767 and a validation accuracy of 0.5. Both training and validation accuracy stayed pinned near 0.5, which is chance level for this balanced binary task, likely because the limited amount of training data was insufficient for fine-tuning. Fine-tuning VGG16 took about 2 minutes in total. Despite these efforts, test performance dropped sharply compared to the base configuration: 0.5 with the fine-tuned pre-trained weights versus 0.96 for the base model. The next step is to add data augmentation and see whether accuracy improves.
data_augmentation = keras.Sequential(
    [
        layers.RandomRotation(0.2),  # factor is a fraction of 2*pi, i.e. rotations up to ±72 degrees
        layers.RandomTranslation(width_factor=0.2, height_factor=0.0),  # up to 20% horizontal shift
        layers.RandomBrightness(0.5),  # brightness variation (note: default value_range is (0, 255))
        layers.RandomFlip("horizontal"),  # horizontal flip
    ]
)
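As a quick sanity check, a standalone sketch like the following can visualize what the pipeline produces. It uses a random stand-in image rather than the actual meme dataset, and rebuilds the same pipeline so it runs on its own; note that `training=True` is required for the random layers to apply their transformations outside of `fit()`:

```python
import numpy as np
import matplotlib.pyplot as plt
from tensorflow import keras
from keras import layers

# Rebuild the same augmentation pipeline so the sketch is self-contained
data_augmentation = keras.Sequential([
    layers.RandomRotation(0.2),
    layers.RandomTranslation(width_factor=0.2, height_factor=0.0),
    layers.RandomBrightness(0.5),
    layers.RandomFlip("horizontal"),
])

# A random stand-in for one 150x150 RGB image in the [0, 255] range
sample = np.random.uniform(0, 255, size=(1, 150, 150, 3)).astype("float32")

plt.figure(figsize=(8, 2))
for i in range(4):
    augmented = data_augmentation(sample, training=True)  # training=True enables the randomness
    plt.subplot(1, 4, i + 1)
    plt.imshow(np.clip(np.asarray(augmented)[0] / 255.0, 0.0, 1.0))
    plt.axis("off")
plt.show()
```

Each run of the loop body draws a fresh random transformation, so the four panels show four different augmented variants of the same image.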
from keras.applications.vgg16 import preprocess_input

def build_pretrained_vgg16(input_shape, n_classes):
    inputs = keras.Input(shape=input_shape)
    x = data_augmentation(inputs)
    # Note: preprocess_input expects raw pixels in [0, 255]; if the data generator
    # already rescales by 1/255, the inputs arrive double-scaled, which can stall learning
    x = preprocess_input(x)
    # Load the VGG16 model, pre-trained on ImageNet
    base_model = keras.applications.VGG16(include_top=False, weights='imagenet', input_tensor=x)
    x = base_model.output
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dense(1024, activation='relu')(x)
    x = layers.Dropout(0.2)(x)
    predictions = layers.Dense(1, activation='sigmoid')(x)  # 'sigmoid' for binary output (n_classes is unused here)
    model = keras.models.Model(inputs=inputs, outputs=predictions)
    # Freeze the first seven layers so they are not updated during fine-tuning
    for layer in base_model.layers[:7]:
        layer.trainable = False
    return model
# Let's build the model using the input shape and number of classes defined earlier
vgg16_pretrained_model = build_pretrained_vgg16(input_shape, num_classes)
# Let's compile the CNN model with binary cross-entropy loss and the Adam optimizer
vgg16_pretrained_model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy', keras.metrics.Precision()])
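A side note on the `precision_33` / `precision_34` keys seen in the logs: Keras auto-numbers unnamed metric instances, so the history keys change each time a new `Precision()` object is created. A minimal sketch (using a toy model, not the VGG16 pipeline) of how an explicit name keeps the keys stable as `precision` / `val_precision`:

```python
from tensorflow import keras

# Naming the metric fixes the history keys to "precision" / "val_precision"
precision_metric = keras.metrics.Precision(name="precision")

model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(loss="binary_crossentropy", optimizer="adam",
              metrics=["accuracy", precision_metric])
```

With this change, later plotting code can read `history.history["precision"]` without tracking the auto-generated suffix.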
# Let's define the callbacks for model checkpointing and early stopping
cb_check = keras.callbacks.ModelCheckpoint(
    filepath="vgg_pretrained_checkpoint_filepath",
    save_best_only=True,
    monitor="val_loss",
)
cb_early = keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=30,
)
# Let's train and validate the model on the training and validation data
# (the generators already define the batch size, so no batch_size argument is passed)
history_vgg_pretrained = vgg16_pretrained_model.fit(train_generator, validation_data=valid_generator, epochs=50, verbose=1, callbacks=[cb_check, cb_early])
Epoch 1/50 150/150 [==============================] - 13s 50ms/step - loss: 0.8948 - accuracy: 0.4733 - precision_34: 0.4767 - val_loss: 0.6937 - val_accuracy: 0.5000 - val_precision_34: 0.5000
... (epochs 2-49 omitted: loss stays near 0.693 and val_accuracy stays pinned at 0.5000 throughout) ...
Epoch 50/50 150/150 [==============================] - 4s 25ms/step - loss: 0.6933 - accuracy: 0.4600 - precision_34: 0.4469 - val_loss: 0.6931 - val_accuracy: 0.5000 - val_precision_34: 0.0000e+00
train_accuracy = history_vgg_pretrained.history["accuracy"]
train_loss = history_vgg_pretrained.history["loss"]
train_precision = history_vgg_pretrained.history["precision_34"]
val_accuracy = history_vgg_pretrained.history["val_accuracy"]
val_loss = history_vgg_pretrained.history["val_loss"]
val_precision = history_vgg_pretrained.history["val_precision_34"]
epochs = range(1, len(train_accuracy) + 1)
# Let's plot the learning curve for each tracked metric
curves = [
    ("Training Accuracy", train_accuracy),
    ("Training Loss", train_loss),
    ("Training Precision", train_precision),
    ("Validation Accuracy", val_accuracy),
    ("Validation Loss", val_loss),
    ("Validation Precision", val_precision),
]
for title, values in curves:
    plt.figure()
    plt.plot(epochs, values, "bo", label=title)
    plt.title(title)
    plt.legend()
    plt.show()
# Let's load the best-performing model and evaluate it on the test data
model = keras.models.load_model("vgg_pretrained_checkpoint_filepath")
model.evaluate(test_generator)
50/50 [==============================] - 1s 16ms/step - loss: 0.6931 - accuracy: 0.5000 - precision_34: 0.5000
[0.6931473016738892, 0.5, 0.5]
Using the pre-trained VGG16 architecture with data augmentation, fine-tuned on the political meme dataset for the full 50 epochs, we reached a final training accuracy of 0.4600 and a validation accuracy of 0.5. As in the previous run, both training and validation accuracy stayed pinned near 0.5 (chance level) throughout, likely because the limited amount of training data was insufficient for fine-tuning. Fine-tuning took about 2 minutes in total. Test accuracy remained 0.5, far below the 0.96 achieved by the base configuration, so data augmentation did not help here.
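Beyond dataset size, one plausible culprit for the chance-level results is a preprocessing mismatch: VGG16's `preprocess_input` expects raw pixels in [0, 255], while the generators in this notebook rescale to [0, 1]. A small self-contained sketch (using a synthetic mid-gray image, not the meme data) showing how double-scaled inputs land far from the range the pre-trained weights were trained on:

```python
import numpy as np
from keras.applications.vgg16 import preprocess_input

raw = np.full((1, 150, 150, 3), 128.0, dtype="float32")  # mid-gray in [0, 255]
rescaled = raw / 255.0  # what a rescale=1./255 generator would emit

# preprocess_input subtracts the ImageNet channel means (~104/117/124)
print(preprocess_input(raw.copy()).mean())       # roughly centered near zero
print(preprocess_input(rescaled.copy()).mean())  # far off-center, around -114
```

If this is the cause, either dropping the `rescale=1./255` from the generator feeding the VGG16 model or removing the `preprocess_input` call would be worth testing.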
| Model Description | Validation Accuracy | Test Accuracy |
|---|---|---|
| DenseNet - Powerful Network Architecture | 0.9750 | 0.99 |
| ResNet - Powerful Network Architecture | 0.82 | 0.75 |
| ResNet50 - Pretrained Model | 0.9950 | 0.9850 |
| VGG16 - Pretrained Model | 0.5 | 0.5 |
| VGG16 - Pretrained Model (Data Augmentation) | 0.5 | 0.5 |